2026-03-09 00:00:07.295380 | Job console starting
2026-03-09 00:00:07.356884 | Updating git repos
2026-03-09 00:00:07.651759 | Cloning repos into workspace
2026-03-09 00:00:07.880574 | Restoring repo states
2026-03-09 00:00:07.915403 | Merging changes
2026-03-09 00:00:07.915427 | Checking out repos
2026-03-09 00:00:08.288213 | Preparing playbooks
2026-03-09 00:00:09.333332 | Running Ansible setup
2026-03-09 00:00:16.981891 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-09 00:00:19.124387 |
2026-03-09 00:00:19.124568 | PLAY [Base pre]
2026-03-09 00:00:19.179274 |
2026-03-09 00:00:19.179449 | TASK [Setup log path fact]
2026-03-09 00:00:19.220727 | orchestrator | ok
2026-03-09 00:00:19.258643 |
2026-03-09 00:00:19.258908 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-09 00:00:19.317751 | orchestrator | ok
2026-03-09 00:00:19.338915 |
2026-03-09 00:00:19.339032 | TASK [emit-job-header : Print job information]
2026-03-09 00:00:19.433476 | # Job Information
2026-03-09 00:00:19.433679 | Ansible Version: 2.16.14
2026-03-09 00:00:19.433741 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-09 00:00:19.433775 | Pipeline: periodic-midnight
2026-03-09 00:00:19.433798 | Executor: 521e9411259a
2026-03-09 00:00:19.433819 | Triggered by: https://github.com/osism/testbed
2026-03-09 00:00:19.433841 | Event ID: ba3e5e257f914ab0a0c5d45d3402b562
2026-03-09 00:00:19.454810 |
2026-03-09 00:00:19.454957 | LOOP [emit-job-header : Print node information]
2026-03-09 00:00:19.810909 | orchestrator | ok:
2026-03-09 00:00:19.811074 | orchestrator | # Node Information
2026-03-09 00:00:19.811103 | orchestrator | Inventory Hostname: orchestrator
2026-03-09 00:00:19.811124 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-09 00:00:19.811143 | orchestrator | Username: zuul-testbed03
2026-03-09 00:00:19.811161 | orchestrator | Distro: Debian 12.13
2026-03-09 00:00:19.811181 | orchestrator | Provider: static-testbed
2026-03-09 00:00:19.811198 | orchestrator | Region:
2026-03-09 00:00:19.811215 | orchestrator | Label: testbed-orchestrator
2026-03-09 00:00:19.811231 | orchestrator | Product Name: OpenStack Nova
2026-03-09 00:00:19.811247 | orchestrator | Interface IP: 81.163.193.140
2026-03-09 00:00:19.830592 |
2026-03-09 00:00:19.830761 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-09 00:00:21.451250 | orchestrator -> localhost | changed
2026-03-09 00:00:21.459560 |
2026-03-09 00:00:21.459677 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-09 00:00:23.571185 | orchestrator -> localhost | changed
2026-03-09 00:00:23.596365 |
2026-03-09 00:00:23.596460 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-09 00:00:24.403212 | orchestrator -> localhost | ok
2026-03-09 00:00:24.408947 |
2026-03-09 00:00:24.409044 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-09 00:00:24.453250 | orchestrator | ok
2026-03-09 00:00:24.485522 | orchestrator | included: /var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-09 00:00:24.503088 |
2026-03-09 00:00:24.503183 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-09 00:00:28.104931 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-09 00:00:28.105099 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/work/e666ea591f8a46f2993184f9863979bf_id_rsa
2026-03-09 00:00:28.105131 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/work/e666ea591f8a46f2993184f9863979bf_id_rsa.pub
2026-03-09 00:00:28.105153 | orchestrator -> localhost | The key fingerprint is:
2026-03-09 00:00:28.105253 | orchestrator -> localhost | SHA256:8mvfIkmcDemOjciFEs2owhSvGXTfS+Cm9ZfgEQqyJ5c zuul-build-sshkey
2026-03-09 00:00:28.105288 | orchestrator -> localhost | The key's randomart image is:
2026-03-09 00:00:28.105317 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-09 00:00:28.105337 | orchestrator -> localhost | |                 |
2026-03-09 00:00:28.105356 | orchestrator -> localhost | |   + o . .       |
2026-03-09 00:00:28.105373 | orchestrator -> localhost | |. *+= + ..       |
2026-03-09 00:00:28.105390 | orchestrator -> localhost | | =oEo* =o        |
2026-03-09 00:00:28.105407 | orchestrator -> localhost | |o.B.+.=oS+.      |
2026-03-09 00:00:28.105430 | orchestrator -> localhost | |o+... .*=o.      |
2026-03-09 00:00:28.105447 | orchestrator -> localhost | |. o o *o.        |
2026-03-09 00:00:28.105562 | orchestrator -> localhost | | o o *...        |
2026-03-09 00:00:28.105598 | orchestrator -> localhost | | ..o...          |
2026-03-09 00:00:28.105627 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-09 00:00:28.105697 | orchestrator -> localhost | ok: Runtime: 0:00:02.447693
2026-03-09 00:00:28.113824 |
2026-03-09 00:00:28.113908 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-09 00:00:28.205962 | orchestrator | ok
2026-03-09 00:00:28.224146 | orchestrator | included: /var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-09 00:00:28.241125 |
2026-03-09 00:00:28.241218 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-09 00:00:28.273845 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:28.280795 |
2026-03-09 00:00:28.280892 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-09 00:00:29.073734 | orchestrator | changed
2026-03-09 00:00:29.084255 |
2026-03-09 00:00:29.084374 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-09 00:00:29.426205 | orchestrator | ok
2026-03-09 00:00:29.452776 |
2026-03-09 00:00:29.452893 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-09 00:00:29.989170 | orchestrator | ok
2026-03-09 00:00:30.000181 |
2026-03-09 00:00:30.000272 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-09 00:00:30.506300 | orchestrator | ok
2026-03-09 00:00:30.512630 |
2026-03-09 00:00:30.512724 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-09 00:00:30.552918 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:30.571366 |
2026-03-09 00:00:30.571465 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-09 00:00:31.874485 | orchestrator -> localhost | changed
2026-03-09 00:00:31.885353 |
2026-03-09 00:00:31.885446 | TASK [add-build-sshkey : Add back temp key]
2026-03-09 00:00:32.908628 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/work/e666ea591f8a46f2993184f9863979bf_id_rsa (zuul-build-sshkey)
2026-03-09 00:00:32.908872 | orchestrator -> localhost | ok: Runtime: 0:00:00.039310
2026-03-09 00:00:32.914818 |
2026-03-09 00:00:32.914931 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-09 00:00:33.671324 | orchestrator | ok
2026-03-09 00:00:33.676133 |
2026-03-09 00:00:33.676211 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-09 00:00:33.729069 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:33.875661 |
2026-03-09 00:00:33.875806 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-09 00:00:34.532388 | orchestrator | ok
2026-03-09 00:00:34.552996 |
2026-03-09 00:00:34.553113 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-09 00:00:34.604079 | orchestrator | ok
2026-03-09 00:00:34.635610 |
2026-03-09 00:00:34.635761 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-09 00:00:35.439312 | orchestrator -> localhost | ok
2026-03-09 00:00:35.459151 |
2026-03-09 00:00:35.459262 | TASK [validate-host : Collect information about the host]
2026-03-09 00:00:37.253851 | orchestrator | ok
2026-03-09 00:00:37.301228 |
2026-03-09 00:00:37.301348 | TASK [validate-host : Sanitize hostname]
2026-03-09 00:00:37.431247 | orchestrator | ok
2026-03-09 00:00:37.440751 |
2026-03-09 00:00:37.440855 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-09 00:00:39.387161 | orchestrator -> localhost | changed
2026-03-09 00:00:39.392221 |
2026-03-09 00:00:39.392310 | TASK [validate-host : Collect information about zuul worker]
2026-03-09 00:00:40.107369 | orchestrator | ok
2026-03-09 00:00:40.111877 |
2026-03-09 00:00:40.111964 | TASK [validate-host : Write out all zuul information for each host]
2026-03-09 00:00:41.469910 | orchestrator -> localhost | changed
2026-03-09 00:00:41.479057 |
2026-03-09 00:00:41.479151 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-09 00:00:41.780851 | orchestrator | ok
2026-03-09 00:00:41.786440 |
2026-03-09 00:00:41.786517 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-09 00:02:05.336695 | orchestrator | changed:
2026-03-09 00:02:05.336948 | orchestrator | .d..t...... src/
2026-03-09 00:02:05.336984 | orchestrator | .d..t...... src/github.com/
2026-03-09 00:02:05.337009 | orchestrator | .d..t...... src/github.com/osism/
2026-03-09 00:02:05.337031 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-09 00:02:05.337051 | orchestrator | RedHat.yml
2026-03-09 00:02:05.351917 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-09 00:02:05.351935 | orchestrator | RedHat.yml
2026-03-09 00:02:05.352050 | orchestrator | = 1.53.0"...
2026-03-09 00:02:16.525806 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-09 00:02:16.544101 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-09 00:02:16.688565 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-09 00:02:17.440892 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-09 00:02:17.508515 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-09 00:02:18.027917 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-09 00:02:18.095196 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-09 00:02:18.549555 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-09 00:02:18.549616 | orchestrator |
2026-03-09 00:02:18.549623 | orchestrator | Providers are signed by their developers.
2026-03-09 00:02:18.549629 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-09 00:02:18.549634 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-09 00:02:18.549640 | orchestrator |
2026-03-09 00:02:18.549645 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-09 00:02:18.549649 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-09 00:02:18.549663 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-09 00:02:18.549667 | orchestrator | you run "tofu init" in the future.
2026-03-09 00:02:18.550082 | orchestrator |
2026-03-09 00:02:18.550103 | orchestrator | OpenTofu has been successfully initialized!
2026-03-09 00:02:18.550122 | orchestrator |
2026-03-09 00:02:18.550127 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-09 00:02:18.550135 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-09 00:02:18.550140 | orchestrator | should now work.
2026-03-09 00:02:18.550143 | orchestrator |
2026-03-09 00:02:18.550147 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-09 00:02:18.550152 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-09 00:02:18.550156 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-09 00:02:18.714105 | orchestrator | Created and switched to workspace "ci"!
2026-03-09 00:02:18.714195 | orchestrator |
2026-03-09 00:02:18.714204 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-09 00:02:18.714210 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-09 00:02:18.714215 | orchestrator | for this configuration.
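[Editor's note: the init and workspace messages above are OpenTofu's standard output. The job's actual wrapper or Makefile invocation is not visible in this log; output of this shape would be produced by commands along these lines (a sketch, not the job's verbatim command line):]

```shell
# Download the providers declared in the configuration (openstack, local,
# null) and write .terraform.lock.hcl recording the selected versions.
tofu init

# Create and switch to an isolated state workspace named "ci" (matches the
# 'Created and switched to workspace "ci"!' line above).
tofu workspace new ci

# Preview the resources OpenTofu would create.
tofu plan
```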
2026-03-09 00:02:18.802860 | orchestrator | ci.auto.tfvars
2026-03-09 00:02:19.188195 | orchestrator | default_custom.tf
2026-03-09 00:02:24.309165 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-09 00:02:24.881483 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-09 00:02:25.131028 | orchestrator |
2026-03-09 00:02:25.131123 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-09 00:02:25.131137 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-09 00:02:25.131161 | orchestrator |   + create
2026-03-09 00:02:25.131170 | orchestrator |  <= read (data resources)
2026-03-09 00:02:25.131178 | orchestrator |
2026-03-09 00:02:25.131186 | orchestrator | OpenTofu will perform the following actions:
2026-03-09 00:02:25.131194 | orchestrator |
2026-03-09 00:02:25.131202 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-09 00:02:25.131210 | orchestrator |   # (config refers to values not yet known)
2026-03-09 00:02:25.131217 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-09 00:02:25.131225 | orchestrator |       + checksum = (known after apply)
2026-03-09 00:02:25.131232 | orchestrator |       + created_at = (known after apply)
2026-03-09 00:02:25.131240 | orchestrator |       + file = (known after apply)
2026-03-09 00:02:25.131247 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.131278 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.131286 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-09 00:02:25.131294 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-09 00:02:25.131301 | orchestrator |       + most_recent = true
2026-03-09 00:02:25.131309 | orchestrator |       + name = (known after apply)
2026-03-09 00:02:25.131316 | orchestrator |       + protected = (known after apply)
2026-03-09 00:02:25.131323 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.131334 | orchestrator |       + schema = (known after apply)
2026-03-09 00:02:25.131341 | orchestrator |       + size_bytes = (known after apply)
2026-03-09 00:02:25.131349 | orchestrator |       + tags = (known after apply)
2026-03-09 00:02:25.131357 | orchestrator |       + updated_at = (known after apply)
2026-03-09 00:02:25.131364 | orchestrator |     }
2026-03-09 00:02:25.131375 | orchestrator |
2026-03-09 00:02:25.131382 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-09 00:02:25.131390 | orchestrator |   # (config refers to values not yet known)
2026-03-09 00:02:25.131397 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-09 00:02:25.131405 | orchestrator |       + checksum = (known after apply)
2026-03-09 00:02:25.131412 | orchestrator |       + created_at = (known after apply)
2026-03-09 00:02:25.131418 | orchestrator |       + file = (known after apply)
2026-03-09 00:02:25.131424 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.131430 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.131436 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-09 00:02:25.131442 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-09 00:02:25.131448 | orchestrator |       + most_recent = true
2026-03-09 00:02:25.131454 | orchestrator |       + name = (known after apply)
2026-03-09 00:02:25.131460 | orchestrator |       + protected = (known after apply)
2026-03-09 00:02:25.131466 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.131472 | orchestrator |       + schema = (known after apply)
2026-03-09 00:02:25.131479 | orchestrator |       + size_bytes = (known after apply)
2026-03-09 00:02:25.131485 | orchestrator |       + tags = (known after apply)
2026-03-09 00:02:25.131492 | orchestrator |       + updated_at = (known after apply)
2026-03-09 00:02:25.131498 | orchestrator |     }
2026-03-09 00:02:25.131505 | orchestrator |
2026-03-09 00:02:25.131512 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-09 00:02:25.131519 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-09 00:02:25.131526 | orchestrator |       + content = (known after apply)
2026-03-09 00:02:25.131533 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.131539 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.131581 | orchestrator |       + content_md5 = (known after apply)
2026-03-09 00:02:25.131589 | orchestrator |       + content_sha1 = (known after apply)
2026-03-09 00:02:25.131596 | orchestrator |       + content_sha256 = (known after apply)
2026-03-09 00:02:25.131603 | orchestrator |       + content_sha512 = (known after apply)
2026-03-09 00:02:25.131617 | orchestrator |       + directory_permission = "0777"
2026-03-09 00:02:25.131624 | orchestrator |       + file_permission = "0644"
2026-03-09 00:02:25.131632 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-09 00:02:25.131639 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.131647 | orchestrator |     }
2026-03-09 00:02:25.131654 | orchestrator |
2026-03-09 00:02:25.131661 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-09 00:02:25.131668 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-09 00:02:25.131699 | orchestrator |       + content = (known after apply)
2026-03-09 00:02:25.131706 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.131713 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.131721 | orchestrator |       + content_md5 = (known after apply)
2026-03-09 00:02:25.131728 | orchestrator |       + content_sha1 = (known after apply)
2026-03-09 00:02:25.131735 | orchestrator |       + content_sha256 = (known after apply)
2026-03-09 00:02:25.131775 | orchestrator |       + content_sha512 = (known after apply)
2026-03-09 00:02:25.131800 | orchestrator |       + directory_permission = "0777"
2026-03-09 00:02:25.131807 | orchestrator |       + file_permission = "0644"
2026-03-09 00:02:25.131821 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-09 00:02:25.131827 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.131834 | orchestrator |     }
2026-03-09 00:02:25.131840 | orchestrator |
2026-03-09 00:02:25.131853 | orchestrator |   # local_file.inventory will be created
2026-03-09 00:02:25.131860 | orchestrator |   + resource "local_file" "inventory" {
2026-03-09 00:02:25.131866 | orchestrator |       + content = (known after apply)
2026-03-09 00:02:25.131873 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.131879 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.131886 | orchestrator |       + content_md5 = (known after apply)
2026-03-09 00:02:25.131892 | orchestrator |       + content_sha1 = (known after apply)
2026-03-09 00:02:25.131899 | orchestrator |       + content_sha256 = (known after apply)
2026-03-09 00:02:25.131906 | orchestrator |       + content_sha512 = (known after apply)
2026-03-09 00:02:25.131912 | orchestrator |       + directory_permission = "0777"
2026-03-09 00:02:25.131919 | orchestrator |       + file_permission = "0644"
2026-03-09 00:02:25.131926 | orchestrator |       + filename = "inventory.ci"
2026-03-09 00:02:25.131932 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.131939 | orchestrator |     }
2026-03-09 00:02:25.131949 | orchestrator |
2026-03-09 00:02:25.131956 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-09 00:02:25.131963 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-09 00:02:25.131969 | orchestrator |       + content = (sensitive value)
2026-03-09 00:02:25.131976 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.131982 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.131989 | orchestrator |       + content_md5 = (known after apply)
2026-03-09 00:02:25.131995 | orchestrator |       + content_sha1 = (known after apply)
2026-03-09 00:02:25.132002 | orchestrator |       + content_sha256 = (known after apply)
2026-03-09 00:02:25.132008 | orchestrator |       + content_sha512 = (known after apply)
2026-03-09 00:02:25.132015 | orchestrator |       + directory_permission = "0700"
2026-03-09 00:02:25.132021 | orchestrator |       + file_permission = "0600"
2026-03-09 00:02:25.132028 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-09 00:02:25.132034 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132041 | orchestrator |     }
2026-03-09 00:02:25.132047 | orchestrator |
2026-03-09 00:02:25.132054 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-09 00:02:25.132061 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-09 00:02:25.132067 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132074 | orchestrator |     }
2026-03-09 00:02:25.132080 | orchestrator |
2026-03-09 00:02:25.132087 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-09 00:02:25.132094 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-09 00:02:25.132100 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132107 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132113 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132120 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132127 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132133 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-09 00:02:25.132139 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132146 | orchestrator |       + size = 80
2026-03-09 00:02:25.132153 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132159 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132166 | orchestrator |     }
2026-03-09 00:02:25.132172 | orchestrator |
2026-03-09 00:02:25.132179 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-09 00:02:25.132185 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.132192 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132199 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132205 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132224 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132231 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132238 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-09 00:02:25.132244 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132251 | orchestrator |       + size = 80
2026-03-09 00:02:25.132257 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132264 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132270 | orchestrator |     }
2026-03-09 00:02:25.132277 | orchestrator |
2026-03-09 00:02:25.132283 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-09 00:02:25.132290 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.132297 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132303 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132309 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132316 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132322 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132329 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-09 00:02:25.132335 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132342 | orchestrator |       + size = 80
2026-03-09 00:02:25.132349 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132355 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132362 | orchestrator |     }
2026-03-09 00:02:25.132368 | orchestrator |
2026-03-09 00:02:25.132375 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-09 00:02:25.132381 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.132388 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132395 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132401 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132407 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132414 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132421 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-09 00:02:25.132427 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132434 | orchestrator |       + size = 80
2026-03-09 00:02:25.132440 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132447 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132453 | orchestrator |     }
2026-03-09 00:02:25.132460 | orchestrator |
2026-03-09 00:02:25.132466 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-09 00:02:25.132473 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.132479 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132486 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132493 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132499 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132506 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132515 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-09 00:02:25.132522 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132529 | orchestrator |       + size = 80
2026-03-09 00:02:25.132535 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132542 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132548 | orchestrator |     }
2026-03-09 00:02:25.132555 | orchestrator |
2026-03-09 00:02:25.132561 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-09 00:02:25.132568 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.132575 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132581 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132592 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132603 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132609 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132616 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-09 00:02:25.132622 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132629 | orchestrator |       + size = 80
2026-03-09 00:02:25.132636 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132642 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132649 | orchestrator |     }
2026-03-09 00:02:25.132655 | orchestrator |
2026-03-09 00:02:25.132662 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-09 00:02:25.132668 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.132675 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132681 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132688 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132694 | orchestrator |       + image_id = (known after apply)
2026-03-09 00:02:25.132701 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132707 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-09 00:02:25.132714 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132720 | orchestrator |       + size = 80
2026-03-09 00:02:25.132727 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132733 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132740 | orchestrator |     }
2026-03-09 00:02:25.132758 | orchestrator |
2026-03-09 00:02:25.132764 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-09 00:02:25.132770 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.132776 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132783 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132789 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132796 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132802 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-09 00:02:25.132809 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132815 | orchestrator |       + size = 20
2026-03-09 00:02:25.132822 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132828 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132835 | orchestrator |     }
2026-03-09 00:02:25.132841 | orchestrator |
2026-03-09 00:02:25.132848 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-09 00:02:25.132855 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.132861 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132868 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132874 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132881 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132887 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-09 00:02:25.132893 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132900 | orchestrator |       + size = 20
2026-03-09 00:02:25.132914 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.132921 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.132927 | orchestrator |     }
2026-03-09 00:02:25.132934 | orchestrator |
2026-03-09 00:02:25.132941 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-09 00:02:25.132947 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.132954 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.132960 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.132967 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.132973 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.132978 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-09 00:02:25.132984 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.132995 | orchestrator |       + size = 20
2026-03-09 00:02:25.133001 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.133007 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.133013 | orchestrator |     }
2026-03-09 00:02:25.133019 | orchestrator |
2026-03-09 00:02:25.133026 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-09 00:02:25.133032 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.133038 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.133044 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.133050 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.133056 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.133062 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-03-09 00:02:25.133068 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.133074 | orchestrator |       + size = 20
2026-03-09 00:02:25.133080 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.133087 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.133093 | orchestrator |     }
2026-03-09 00:02:25.133099 | orchestrator |
2026-03-09 00:02:25.133105 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-09 00:02:25.133111 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.133114 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.133119 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.133126 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.133132 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.133138 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-03-09 00:02:25.133144 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.133154 | orchestrator |       + size = 20
2026-03-09 00:02:25.133160 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.133167 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.133173 | orchestrator |     }
2026-03-09 00:02:25.133179 | orchestrator |
2026-03-09 00:02:25.133185 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-09 00:02:25.133191 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.133197 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.133203 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.133209 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.133215 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.133221 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-03-09 00:02:25.133231 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.133237 | orchestrator |       + size = 20
2026-03-09 00:02:25.133243 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.133249 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.133255 | orchestrator |     }
2026-03-09 00:02:25.133261 | orchestrator |
2026-03-09 00:02:25.133266 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-09 00:02:25.133272 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.133278 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.133283 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.133289 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.133295 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.133302 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-03-09 00:02:25.133308 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.133314 | orchestrator |       + size = 20
2026-03-09 00:02:25.133320 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.133326 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.133332 | orchestrator |     }
2026-03-09 00:02:25.133338 | orchestrator |
2026-03-09 00:02:25.133344 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-09 00:02:25.133350 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.133361 | orchestrator |       + attachment = (known after apply)
2026-03-09 00:02:25.133367 | orchestrator |       + availability_zone = "nova"
2026-03-09 00:02:25.133373 | orchestrator |       + id = (known after apply)
2026-03-09 00:02:25.133379 | orchestrator |       + metadata = (known after apply)
2026-03-09 00:02:25.133386 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-03-09 00:02:25.133392 | orchestrator |       + region = (known after apply)
2026-03-09 00:02:25.133398 | orchestrator |       + size = 20
2026-03-09 00:02:25.133404 | orchestrator |       + volume_retype_policy = "never"
2026-03-09 00:02:25.133410 | orchestrator |       + volume_type = "ssd"
2026-03-09 00:02:25.133416 | orchestrator |     }
2026-03-09 00:02:25.133422 | orchestrator |
2026-03-09 00:02:25.133427 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-09 00:02:25.133433 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:25.133439 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:25.133445 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.133452 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.133458 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:25.133464 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-09 00:02:25.133470 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.133476 | orchestrator | + size = 20 2026-03-09 00:02:25.133482 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:25.133488 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:25.133494 | orchestrator | } 2026-03-09 00:02:25.133501 | orchestrator | 2026-03-09 00:02:25.133506 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-09 00:02:25.133512 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-09 00:02:25.133519 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.133525 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.133531 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.133537 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.133543 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.133548 | orchestrator | + config_drive = true 2026-03-09 00:02:25.133554 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.133560 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.133566 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-09 00:02:25.133572 | orchestrator | + force_delete = false 2026-03-09 00:02:25.133578 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.133584 | 
orchestrator | + id = (known after apply) 2026-03-09 00:02:25.133590 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.133596 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.133602 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.133608 | orchestrator | + name = "testbed-manager" 2026-03-09 00:02:25.133613 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.133619 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.133625 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.133631 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.133637 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.133644 | orchestrator | + user_data = (sensitive value) 2026-03-09 00:02:25.133651 | orchestrator | 2026-03-09 00:02:25.133657 | orchestrator | + block_device { 2026-03-09 00:02:25.133664 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.133670 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.133680 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.133687 | orchestrator | + multiattach = false 2026-03-09 00:02:25.133693 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.133698 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.133708 | orchestrator | } 2026-03-09 00:02:25.133715 | orchestrator | 2026-03-09 00:02:25.133722 | orchestrator | + network { 2026-03-09 00:02:25.133728 | orchestrator | + access_network = false 2026-03-09 00:02:25.133734 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.133740 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.133779 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.133785 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.133792 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.133798 | orchestrator | + uuid = (known after apply) 2026-03-09 
00:02:25.133805 | orchestrator | } 2026-03-09 00:02:25.133811 | orchestrator | } 2026-03-09 00:02:25.133817 | orchestrator | 2026-03-09 00:02:25.133823 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-09 00:02:25.133829 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.133835 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.133841 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.133846 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.133852 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.133858 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.133864 | orchestrator | + config_drive = true 2026-03-09 00:02:25.133870 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.133880 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.133887 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.133892 | orchestrator | + force_delete = false 2026-03-09 00:02:25.133899 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.133905 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.133912 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.133918 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.133924 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.133929 | orchestrator | + name = "testbed-node-0" 2026-03-09 00:02:25.133936 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.133942 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.133948 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.133954 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.133960 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.133965 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.133972 | orchestrator | 2026-03-09 00:02:25.133978 | orchestrator | + block_device { 2026-03-09 00:02:25.133984 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.133990 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.133995 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.134002 | orchestrator | + multiattach = false 2026-03-09 00:02:25.134008 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.134035 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134042 | orchestrator | } 2026-03-09 00:02:25.134047 | orchestrator | 2026-03-09 00:02:25.134054 | orchestrator | + network { 2026-03-09 00:02:25.134061 | orchestrator | + access_network = false 2026-03-09 00:02:25.134067 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.134073 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.134080 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.134086 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.134093 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.134100 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134107 | orchestrator | } 2026-03-09 00:02:25.134114 | orchestrator | } 2026-03-09 00:02:25.134121 | orchestrator | 2026-03-09 00:02:25.134126 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-09 00:02:25.134132 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.134139 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.134150 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.134156 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.134163 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.134169 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.134175 
| orchestrator | + config_drive = true 2026-03-09 00:02:25.134182 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.134189 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.134196 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.134202 | orchestrator | + force_delete = false 2026-03-09 00:02:25.134208 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.134215 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.134222 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.134229 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.134235 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.134241 | orchestrator | + name = "testbed-node-1" 2026-03-09 00:02:25.134247 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.134254 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.134260 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.134267 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.134275 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.134281 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.134287 | orchestrator | 2026-03-09 00:02:25.134293 | orchestrator | + block_device { 2026-03-09 00:02:25.134299 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.134306 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.134312 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.134319 | orchestrator | + multiattach = false 2026-03-09 00:02:25.134325 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.134331 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134338 | orchestrator | } 2026-03-09 00:02:25.134344 | orchestrator | 2026-03-09 00:02:25.134350 | orchestrator | + network { 2026-03-09 00:02:25.134357 | orchestrator | + access_network = 
false 2026-03-09 00:02:25.134363 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.134369 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.134376 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.134382 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.134389 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.134394 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134401 | orchestrator | } 2026-03-09 00:02:25.134408 | orchestrator | } 2026-03-09 00:02:25.134414 | orchestrator | 2026-03-09 00:02:25.134420 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-09 00:02:25.134427 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.134434 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.134440 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.134447 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.134454 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.134464 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.134470 | orchestrator | + config_drive = true 2026-03-09 00:02:25.134477 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.134483 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.134490 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.134498 | orchestrator | + force_delete = false 2026-03-09 00:02:25.134504 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.134510 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.134516 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.134526 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.134533 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.134540 | orchestrator | + name = 
"testbed-node-2" 2026-03-09 00:02:25.134546 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.134556 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.134562 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.134568 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.134574 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.134580 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.134587 | orchestrator | 2026-03-09 00:02:25.134593 | orchestrator | + block_device { 2026-03-09 00:02:25.134600 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.134607 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.134613 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.134619 | orchestrator | + multiattach = false 2026-03-09 00:02:25.134625 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.134632 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134638 | orchestrator | } 2026-03-09 00:02:25.134645 | orchestrator | 2026-03-09 00:02:25.134652 | orchestrator | + network { 2026-03-09 00:02:25.134659 | orchestrator | + access_network = false 2026-03-09 00:02:25.134665 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.134671 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.134677 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.134684 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.134690 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.134696 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134702 | orchestrator | } 2026-03-09 00:02:25.134709 | orchestrator | } 2026-03-09 00:02:25.134715 | orchestrator | 2026-03-09 00:02:25.134722 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-09 00:02:25.134729 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.134735 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.134795 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.134805 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.134812 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.134818 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.134825 | orchestrator | + config_drive = true 2026-03-09 00:02:25.134832 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.134839 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.134844 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.134851 | orchestrator | + force_delete = false 2026-03-09 00:02:25.134857 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.134864 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.134869 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.134876 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.134883 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.134889 | orchestrator | + name = "testbed-node-3" 2026-03-09 00:02:25.134895 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.134901 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.134907 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.134913 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.134919 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.134925 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.134931 | orchestrator | 2026-03-09 00:02:25.134938 | orchestrator | + block_device { 2026-03-09 00:02:25.134947 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.134953 | orchestrator | + delete_on_termination = false 2026-03-09 
00:02:25.134959 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.134970 | orchestrator | + multiattach = false 2026-03-09 00:02:25.134976 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.134982 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.134988 | orchestrator | } 2026-03-09 00:02:25.134995 | orchestrator | 2026-03-09 00:02:25.135002 | orchestrator | + network { 2026-03-09 00:02:25.135008 | orchestrator | + access_network = false 2026-03-09 00:02:25.135014 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.135021 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.135027 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.135033 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.135040 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.135046 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.135053 | orchestrator | } 2026-03-09 00:02:25.135060 | orchestrator | } 2026-03-09 00:02:25.135066 | orchestrator | 2026-03-09 00:02:25.135073 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-09 00:02:25.135079 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.135086 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.135092 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.135099 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.135106 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.135112 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.135118 | orchestrator | + config_drive = true 2026-03-09 00:02:25.135125 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.135131 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.135137 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.135144 | 
orchestrator | + force_delete = false 2026-03-09 00:02:25.135150 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.135156 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.135163 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.135169 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.135176 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.135183 | orchestrator | + name = "testbed-node-4" 2026-03-09 00:02:25.135190 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.135196 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.135203 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.135209 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.135215 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.135221 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.135228 | orchestrator | 2026-03-09 00:02:25.135234 | orchestrator | + block_device { 2026-03-09 00:02:25.135240 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.135246 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.135253 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.135259 | orchestrator | + multiattach = false 2026-03-09 00:02:25.135269 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.135276 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.135282 | orchestrator | } 2026-03-09 00:02:25.135288 | orchestrator | 2026-03-09 00:02:25.135294 | orchestrator | + network { 2026-03-09 00:02:25.135299 | orchestrator | + access_network = false 2026-03-09 00:02:25.135305 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.135311 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.135318 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.135325 | orchestrator | + name = (known 
after apply) 2026-03-09 00:02:25.135330 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.135336 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.135343 | orchestrator | } 2026-03-09 00:02:25.135350 | orchestrator | } 2026-03-09 00:02:25.135361 | orchestrator | 2026-03-09 00:02:25.135367 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-09 00:02:25.135374 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.135380 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.135387 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.135393 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.135400 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.135407 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.135413 | orchestrator | + config_drive = true 2026-03-09 00:02:25.135419 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.135425 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.135432 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.135438 | orchestrator | + force_delete = false 2026-03-09 00:02:25.135448 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.135454 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.135461 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.135467 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.135473 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.135479 | orchestrator | + name = "testbed-node-5" 2026-03-09 00:02:25.135485 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.135491 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.135497 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.135502 | orchestrator | + 
stop_before_destroy = false 2026-03-09 00:02:25.135508 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.135514 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.135520 | orchestrator | 2026-03-09 00:02:25.135526 | orchestrator | + block_device { 2026-03-09 00:02:25.135533 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.135538 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.135544 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.135550 | orchestrator | + multiattach = false 2026-03-09 00:02:25.135556 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.135562 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.135569 | orchestrator | } 2026-03-09 00:02:25.135575 | orchestrator | 2026-03-09 00:02:25.135580 | orchestrator | + network { 2026-03-09 00:02:25.135587 | orchestrator | + access_network = false 2026-03-09 00:02:25.135593 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.135600 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.135606 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.135613 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.135619 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.135625 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.135632 | orchestrator | } 2026-03-09 00:02:25.135639 | orchestrator | } 2026-03-09 00:02:25.135646 | orchestrator | 2026-03-09 00:02:25.135653 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-09 00:02:25.135659 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-09 00:02:25.135665 | orchestrator | + fingerprint = (known after apply) 2026-03-09 00:02:25.135672 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.135677 | orchestrator | + name = "testbed" 2026-03-09 00:02:25.135684 | orchestrator | + private_key = 
(sensitive value) 2026-03-09 00:02:25.135690 | orchestrator | + public_key = (known after apply) 2026-03-09 00:02:25.135697 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.135704 | orchestrator | + user_id = (known after apply) 2026-03-09 00:02:25.135710 | orchestrator | } 2026-03-09 00:02:25.135717 | orchestrator | 2026-03-09 00:02:25.135723 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-09 00:02:25.135730 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:25.135756 | orchestrator | + device = (known after apply) 2026-03-09 00:02:25.135763 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.135769 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:25.135775 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.135781 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:25.135787 | orchestrator | } 2026-03-09 00:02:25.135794 | orchestrator | 2026-03-09 00:02:25.135800 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-09 00:02:25.135806 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:25.135812 | orchestrator | + device = (known after apply) 2026-03-09 00:02:25.135818 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.135825 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:25.135831 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.135837 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:25.135843 | orchestrator | } 2026-03-09 00:02:25.135849 | orchestrator | 2026-03-09 00:02:25.135856 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-09 00:02:25.135862 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-09 00:02:25.140839 | orchestrator | + network_id = (known after apply) 2026-03-09 00:02:25.140845 | orchestrator | + no_gateway = false 2026-03-09 00:02:25.140851 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.140856 | orchestrator | + service_types = (known after apply) 2026-03-09 00:02:25.140866 | orchestrator | + tenant_id = (known after apply) 2026-03-09 00:02:25.140872 | orchestrator | 2026-03-09 00:02:25.140878 | orchestrator | + allocation_pool { 2026-03-09 00:02:25.140884 | orchestrator | + end = "192.168.31.250" 2026-03-09 00:02:25.140890 | orchestrator | + start = "192.168.31.200" 2026-03-09 00:02:25.140897 | orchestrator | } 2026-03-09 00:02:25.140903 | orchestrator | } 2026-03-09 00:02:25.140909 | orchestrator | 2026-03-09 00:02:25.140915 | orchestrator | # terraform_data.image will be created 2026-03-09 00:02:25.140922 | orchestrator | + resource "terraform_data" "image" { 2026-03-09 00:02:25.140928 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.140934 | orchestrator | + input = "Ubuntu 24.04" 2026-03-09 00:02:25.140941 | orchestrator | + output = (known after apply) 2026-03-09 00:02:25.140947 | orchestrator | } 2026-03-09 00:02:25.140954 | orchestrator | 2026-03-09 00:02:25.140960 | orchestrator | # terraform_data.image_node will be created 2026-03-09 00:02:25.140965 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-09 00:02:25.140971 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.140977 | orchestrator | + input = "Ubuntu 24.04" 2026-03-09 00:02:25.140983 | orchestrator | + output = (known after apply) 2026-03-09 00:02:25.140989 | orchestrator | } 2026-03-09 00:02:25.140995 | orchestrator | 2026-03-09 00:02:25.141001 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
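The plan entries above correspond to Terraform HCL roughly like the following sketch. Resource names and attribute values are taken directly from the plan output; everything else (variable names, which security group the VRRP rule attaches to) is an assumption, since the plan shows those references only as `(known after apply)`:

```hcl
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# VRRP is IP protocol number 112; the provider accepts it as a string.
# The target security group is an assumption -- the plan only shows
# security_group_id as (known after apply).
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]
  enable_dhcp     = true
  network_id      = openstack_networking_network_v2.net_management.id

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```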
2026-03-09 00:02:25.141007 | orchestrator | 2026-03-09 00:02:25.141013 | orchestrator | Changes to Outputs: 2026-03-09 00:02:25.141019 | orchestrator | + manager_address = (sensitive value) 2026-03-09 00:02:25.141025 | orchestrator | + private_key = (sensitive value) 2026-03-09 00:02:25.356219 | orchestrator | terraform_data.image: Creating... 2026-03-09 00:02:25.356291 | orchestrator | terraform_data.image: Creation complete after 0s [id=7efdb5ac-2526-f219-acaa-9c4a76ae0fb6] 2026-03-09 00:02:25.357270 | orchestrator | terraform_data.image_node: Creating... 2026-03-09 00:02:25.358251 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=610340bd-769f-96f6-7585-db8607551868] 2026-03-09 00:02:25.378270 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-09 00:02:25.384808 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-09 00:02:25.388423 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-09 00:02:25.388464 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-09 00:02:25.393249 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-09 00:02:25.393308 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-09 00:02:25.393630 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-09 00:02:25.394202 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-09 00:02:25.394340 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-09 00:02:25.394558 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-09 00:02:25.832307 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-09 00:02:25.836630 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
2026-03-09 00:02:25.849417 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-09 00:02:25.853503 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-09 00:02:25.929218 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-09 00:02:25.937903 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-09 00:02:26.554569 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=dc2afc26-75fa-4683-a9f2-cdd27e484f01] 2026-03-09 00:02:26.565342 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-09 00:02:29.067485 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=e401ede7-34f1-42e1-9654-8299af9dca9f] 2026-03-09 00:02:29.068067 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=32378689-09a5-476b-b0b0-ef0e7774d8c3] 2026-03-09 00:02:29.074392 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-09 00:02:29.078140 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-09 00:02:29.092384 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=9db61a68-6a19-4ffe-9dc6-6109c8ad90ec] 2026-03-09 00:02:29.100680 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=96371732-37bf-4fbc-835d-bb1aff74906c] 2026-03-09 00:02:29.103124 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-09 00:02:29.110060 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-03-09 00:02:29.131408 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=a6833780-5d8c-49cb-baf4-596d7658d284] 2026-03-09 00:02:29.139275 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-09 00:02:29.158078 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=d782a267-8601-4e70-9eb9-845bf96c3393] 2026-03-09 00:02:29.158710 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=af02e055-7e15-40a4-be69-d990d822f0ba] 2026-03-09 00:02:29.169874 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-09 00:02:29.173903 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-09 00:02:29.178502 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=a4e768b99739cd2e4dff053a6caf1dbf8fa28443] 2026-03-09 00:02:29.187480 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-09 00:02:29.192574 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=c35e0cfe72fa7113d06630d4b48aa79590de974f] 2026-03-09 00:02:29.200135 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-03-09 00:02:29.346346 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=bc061b31-9341-4fe1-bc4e-7c107d37f2f9] 2026-03-09 00:02:29.369597 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=5e6a3ca4-1946-4dac-9dc1-38bfb1214560] 2026-03-09 00:02:29.950712 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=bf98e2f3-a84e-4817-86e1-84a4fd412f64] 2026-03-09 00:02:30.267648 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=f3ba0c27-8f87-4495-a73a-3744f8375a99] 2026-03-09 00:02:30.276115 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-09 00:02:32.526913 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=e15e63a5-d93c-4538-92f1-da1d17102847] 2026-03-09 00:02:32.556599 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=c47531ab-b779-461a-8b30-0be29ea5188d] 2026-03-09 00:02:32.594592 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=c16889b1-7b46-4e25-af69-310fb50c7b7e] 2026-03-09 00:02:32.643666 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=943da615-a78a-4b04-b113-1769e9052e23] 2026-03-09 00:02:32.656338 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=f4e4dbd9-9c57-4314-9e8e-bca4232cec07] 2026-03-09 00:02:32.690470 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb] 2026-03-09 00:02:33.734402 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=65534832-928c-4059-81ae-9e76e06f99d7] 2026-03-09 00:02:33.740935 | orchestrator | 
openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-09 00:02:33.741685 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-09 00:02:33.744514 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-09 00:02:33.931208 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=6debe73b-18c3-481d-a5bc-dc763aac6bc6] 2026-03-09 00:02:33.939639 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-09 00:02:33.943063 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-09 00:02:33.943115 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-09 00:02:33.943136 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-09 00:02:33.946180 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-09 00:02:33.946341 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-09 00:02:33.988085 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=75e1f306-62d7-4b4d-a26d-c2f0b371989e] 2026-03-09 00:02:34.004152 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-09 00:02:34.004229 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-09 00:02:34.004659 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2026-03-09 00:02:34.189675 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=59062cc4-0391-481e-a67b-b50b1ccb5b1f] 2026-03-09 00:02:34.198599 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-09 00:02:34.235365 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=10a92beb-9177-47a4-a267-b078e3c97039] 2026-03-09 00:02:34.243786 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-09 00:02:34.416172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=865aae8b-b866-4ec4-a967-dc7628a9c2d9] 2026-03-09 00:02:34.425570 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-09 00:02:34.788363 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=578ed0dc-ed20-4bd0-b88a-9da83791afda] 2026-03-09 00:02:34.800094 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-09 00:02:34.918542 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=cf568492-6f12-4b17-91af-7ceece7a1b1f] 2026-03-09 00:02:34.930136 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-09 00:02:35.183512 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=5cb6413c-fe8e-46d2-b5df-0098e1d3c81a] 2026-03-09 00:02:35.194621 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 
2026-03-09 00:02:35.212430 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8f252c34-83fb-4012-9c79-2d5b13f04aa8] 2026-03-09 00:02:35.221339 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-09 00:02:35.382749 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=3ae2c4ab-c563-417d-b757-faa71e5e26f8] 2026-03-09 00:02:35.475722 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a9d9ce83-0eae-412f-a650-2ca73219f006] 2026-03-09 00:02:35.923804 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=3c4c7f5f-d6df-464d-9427-a5186cae6e0d] 2026-03-09 00:02:36.194420 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=9033ae1d-9fa2-460b-9083-177c4e68775a] 2026-03-09 00:02:36.313671 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c90a9860-6c81-4d8d-895a-f53318f11692] 2026-03-09 00:02:36.672055 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=c6dc27cb-fac0-4132-aa52-b8c2c7775ffd] 2026-03-09 00:02:36.854731 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=3d36184a-7967-434b-a264-16763b13c6b3] 2026-03-09 00:02:36.862082 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 
2026-03-09 00:02:37.774711 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 4s [id=bc106408-d6aa-4a73-a3ae-45529c1fb9d5] 2026-03-09 00:02:37.885425 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 3s [id=20c52e6e-6ea5-4078-9337-342019cced88] 2026-03-09 00:02:37.908873 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 3s [id=f79066b4-350b-48cb-99be-6aa8b536668b] 2026-03-09 00:02:37.931433 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-09 00:02:37.937703 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-09 00:02:37.937935 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-09 00:02:37.945182 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-09 00:02:37.952961 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-09 00:02:37.960310 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-09 00:02:39.126623 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=73757b07-5110-4c81-a220-b42c63bf1f07] 2026-03-09 00:02:39.137369 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-09 00:02:39.138619 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-09 00:02:39.140125 | orchestrator | local_file.inventory: Creating... 
2026-03-09 00:02:39.146151 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=84b203c562793153a14075e34f57a5381937c9f9] 2026-03-09 00:02:39.146920 | orchestrator | local_file.inventory: Creation complete after 0s [id=7cf0c8d8f4b2aac73f002d271904e91569f0d425] 2026-03-09 00:02:40.649715 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=73757b07-5110-4c81-a220-b42c63bf1f07] 2026-03-09 00:02:47.934361 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-09 00:02:47.939631 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-09 00:02:47.939672 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-09 00:02:47.952087 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-09 00:02:47.955251 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-09 00:02:47.961588 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-09 00:02:57.941350 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-09 00:02:57.941452 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-09 00:02:57.941465 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-09 00:02:57.952984 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-09 00:02:57.956359 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-09 00:02:57.962756 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[20s elapsed] 2026-03-09 00:03:07.950690 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-09 00:03:07.950851 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-09 00:03:07.950891 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-09 00:03:07.953989 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-09 00:03:07.957276 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-09 00:03:07.963714 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-03-09 00:03:08.876549 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=5a3eac04-5174-47a6-8e71-eccc59551a18] 2026-03-09 00:03:09.150431 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=5de875d1-77d5-456b-b9be-33133af81e20] 2026-03-09 00:03:17.950987 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-03-09 00:03:17.951068 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-03-09 00:03:17.958289 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-03-09 00:03:17.964520 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-03-09 00:03:19.048329 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=7dc4cb50-714c-4a89-bd70-0ce24645b15d] 2026-03-09 00:03:19.221127 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=695e3223-0283-4d7e-b69a-744641e9f2cb] 2026-03-09 00:03:27.959406 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[50s elapsed] 2026-03-09 00:03:27.959659 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-03-09 00:03:28.972988 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=a138c4c9-cfaf-4ec0-a2c0-8bc461b68e20] 2026-03-09 00:03:37.960181 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed] 2026-03-09 00:03:39.345583 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=1e0782c4-4236-4377-8921-27b78abc6c48] 2026-03-09 00:03:39.364502 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-09 00:03:39.376356 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-09 00:03:39.381467 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1760643099727875130] 2026-03-09 00:03:39.390995 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-09 00:03:39.393547 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-09 00:03:39.394679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-09 00:03:39.412308 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-09 00:03:39.420008 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-09 00:03:39.430982 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-09 00:03:39.442544 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-03-09 00:03:39.447749 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-09 00:03:39.448309 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 
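In the log above, `null_resource.node_semaphore` is created only after the last node server finishes, and the volume attachments start immediately afterwards. That is consistent with a "semaphore" resource used to gate the attachments on all instances existing, sketched below. The volume-to-node mapping is not derivable from the log, so the index expression is a placeholder:

```hcl
# Gate: depends on every node server, so anything depending on this
# resource waits until all instances exist.
resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server]
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = length(openstack_blockstorage_volume_v3.node_volume)

  # Placeholder mapping -- the testbed's actual assignment of the nine
  # volumes to the six node servers is not visible in the log.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  depends_on = [null_resource.node_semaphore]
}
```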
2026-03-09 00:03:42.790785 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=1e0782c4-4236-4377-8921-27b78abc6c48/96371732-37bf-4fbc-835d-bb1aff74906c] 2026-03-09 00:03:42.817514 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=5de875d1-77d5-456b-b9be-33133af81e20/5e6a3ca4-1946-4dac-9dc1-38bfb1214560] 2026-03-09 00:03:42.835441 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=695e3223-0283-4d7e-b69a-744641e9f2cb/e401ede7-34f1-42e1-9654-8299af9dca9f] 2026-03-09 00:03:42.854679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5de875d1-77d5-456b-b9be-33133af81e20/9db61a68-6a19-4ffe-9dc6-6109c8ad90ec] 2026-03-09 00:03:42.864217 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=695e3223-0283-4d7e-b69a-744641e9f2cb/d782a267-8601-4e70-9eb9-845bf96c3393] 2026-03-09 00:03:42.874658 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=1e0782c4-4236-4377-8921-27b78abc6c48/32378689-09a5-476b-b0b0-ef0e7774d8c3] 2026-03-09 00:03:48.940009 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=5de875d1-77d5-456b-b9be-33133af81e20/af02e055-7e15-40a4-be69-d990d822f0ba] 2026-03-09 00:03:48.957695 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=1e0782c4-4236-4377-8921-27b78abc6c48/bc061b31-9341-4fe1-bc4e-7c107d37f2f9] 2026-03-09 00:03:48.969005 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=695e3223-0283-4d7e-b69a-744641e9f2cb/a6833780-5d8c-49cb-baf4-596d7658d284] 2026-03-09 00:03:49.446206 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-09 00:03:59.454358 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-09 00:03:59.925970 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=1763ad3a-840e-4278-93c8-7b2ba73b6b8a] 2026-03-09 00:03:59.971033 | orchestrator | 2026-03-09 00:03:59.971121 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-09 00:03:59.971134 | orchestrator | 2026-03-09 00:03:59.971144 | orchestrator | Outputs: 2026-03-09 00:03:59.971152 | orchestrator | 2026-03-09 00:03:59.971160 | orchestrator | manager_address = 2026-03-09 00:03:59.971169 | orchestrator | private_key = 2026-03-09 00:04:00.040895 | orchestrator | ok: Runtime: 0:01:43.648505 2026-03-09 00:04:00.061137 | 2026-03-09 00:04:00.061294 | TASK [Create infrastructure (stable)] 2026-03-09 00:04:00.599623 | orchestrator | skipping: Conditional result was False 2026-03-09 00:04:00.620999 | 2026-03-09 00:04:00.621192 | TASK [Fetch manager address] 2026-03-09 00:04:01.081583 | orchestrator | ok 2026-03-09 00:04:01.090967 | 2026-03-09 00:04:01.091097 | TASK [Set manager_host address] 2026-03-09 00:04:01.171655 | orchestrator | ok 2026-03-09 00:04:01.181482 | 2026-03-09 00:04:01.181603 | LOOP [Update ansible collections] 2026-03-09 00:04:05.235146 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-09 00:04:05.235537 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-09 00:04:05.235620 | orchestrator | Starting galaxy collection install process 2026-03-09 00:04:05.235673 | orchestrator | Process install dependency map 2026-03-09 00:04:05.235717 | orchestrator | Starting collection install process 2026-03-09 00:04:05.235776 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-03-09 00:04:05.235808 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-03-09 00:04:05.235850 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-09 00:04:05.235937 | orchestrator | ok: Item: commons Runtime: 0:00:03.696356 2026-03-09 00:04:07.090497 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-09 00:04:07.090670 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-09 00:04:07.090724 | orchestrator | Starting galaxy collection install process 2026-03-09 00:04:07.090819 | orchestrator | Process install dependency map 2026-03-09 00:04:07.090894 | orchestrator | Starting collection install process 2026-03-09 00:04:07.090931 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-03-09 00:04:07.090967 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-03-09 00:04:07.091001 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-09 00:04:07.091062 | orchestrator | ok: Item: services Runtime: 0:00:01.590852 2026-03-09 00:04:07.118898 | 2026-03-09 00:04:07.119099 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-09 00:04:17.696886 | orchestrator | ok 2026-03-09 00:04:17.707082 | 2026-03-09 00:04:17.707195 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-09 00:05:17.752793 | orchestrator | ok 2026-03-09 00:05:17.761577 | 2026-03-09 00:05:17.761696 | TASK [Fetch manager ssh hostkey] 2026-03-09 00:05:19.346419 | orchestrator | Output suppressed because no_log was given 2026-03-09 00:05:19.366025 | 2026-03-09 
00:05:19.366185 | TASK [Get ssh keypair from terraform environment] 2026-03-09 00:05:19.905574 | orchestrator | ok: Runtime: 0:00:00.013327 2026-03-09 00:05:19.922953 | 2026-03-09 00:05:19.923124 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-09 00:05:19.971630 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-09 00:05:19.981809 | 2026-03-09 00:05:19.981937 | TASK [Run manager part 0] 2026-03-09 00:05:20.905988 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-09 00:05:20.960893 | orchestrator | 2026-03-09 00:05:20.960949 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-09 00:05:20.960958 | orchestrator | 2026-03-09 00:05:20.960974 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-09 00:05:22.962330 | orchestrator | ok: [testbed-manager] 2026-03-09 00:05:22.962396 | orchestrator | 2026-03-09 00:05:22.962424 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-09 00:05:22.962435 | orchestrator | 2026-03-09 00:05:22.962445 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:05:25.054131 | orchestrator | ok: [testbed-manager] 2026-03-09 00:05:25.054251 | orchestrator | 2026-03-09 00:05:25.054281 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-09 00:05:25.786931 | orchestrator | ok: [testbed-manager] 2026-03-09 00:05:25.787028 | orchestrator | 2026-03-09 00:05:25.787045 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-09 00:05:25.835319 | orchestrator | skipping: [testbed-manager] 2026-03-09 
00:05:25.835374 | orchestrator | 2026-03-09 00:05:25.835385 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-09 00:05:25.880814 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:05:25.880861 | orchestrator | 2026-03-09 00:05:25.880867 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-09 00:05:25.914243 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:05:25.914316 | orchestrator | 2026-03-09 00:05:25.914329 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-09 00:05:25.951649 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:05:25.951752 | orchestrator | 2026-03-09 00:05:25.951780 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-09 00:05:25.999502 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:05:25.999590 | orchestrator | 2026-03-09 00:05:25.999609 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-09 00:05:26.042744 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:05:26.042797 | orchestrator | 2026-03-09 00:05:26.042806 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-09 00:05:26.090726 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:05:26.090807 | orchestrator | 2026-03-09 00:05:26.090825 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-09 00:05:26.860189 | orchestrator | changed: [testbed-manager] 2026-03-09 00:05:26.860252 | orchestrator | 2026-03-09 00:05:26.860263 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-09 00:08:20.188531 | orchestrator | changed: [testbed-manager] 2026-03-09 00:08:20.188787 | orchestrator | 2026-03-09 00:08:20.188819 | 
orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-09 00:09:45.362900 | orchestrator | changed: [testbed-manager] 2026-03-09 00:09:45.362978 | orchestrator | 2026-03-09 00:09:45.362995 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-09 00:10:10.159934 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:10.160016 | orchestrator | 2026-03-09 00:10:10.160028 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-09 00:10:19.066491 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:19.066530 | orchestrator | 2026-03-09 00:10:19.066536 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-09 00:10:19.115057 | orchestrator | ok: [testbed-manager] 2026-03-09 00:10:19.115101 | orchestrator | 2026-03-09 00:10:19.115110 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-09 00:10:19.942931 | orchestrator | ok: [testbed-manager] 2026-03-09 00:10:19.942972 | orchestrator | 2026-03-09 00:10:19.942981 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-09 00:10:20.682082 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:20.682138 | orchestrator | 2026-03-09 00:10:20.682147 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-09 00:10:27.301323 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:27.301386 | orchestrator | 2026-03-09 00:10:27.301412 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-09 00:10:33.582397 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:33.582466 | orchestrator | 2026-03-09 00:10:33.582483 | orchestrator | TASK [Install requests >= 2.32.2] 
********************************************** 2026-03-09 00:10:36.404068 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:36.404110 | orchestrator | 2026-03-09 00:10:36.404118 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-09 00:10:38.403106 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:38.403144 | orchestrator | 2026-03-09 00:10:38.403151 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-09 00:10:39.647233 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-09 00:10:39.647353 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-09 00:10:39.647370 | orchestrator | 2026-03-09 00:10:39.647383 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-09 00:10:39.743685 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-09 00:10:39.743739 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-09 00:10:39.743747 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-09 00:10:39.743754 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-09 00:10:53.516112 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-09 00:10:53.516216 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-09 00:10:53.516233 | orchestrator | 2026-03-09 00:10:53.516246 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-09 00:10:54.075221 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:54.075282 | orchestrator | 2026-03-09 00:10:54.075297 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-09 00:12:14.818006 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-09 00:12:14.818172 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-09 00:12:14.818205 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-09 00:12:14.818228 | orchestrator | 2026-03-09 00:12:14.818251 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-09 00:12:17.186604 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-09 00:12:17.187623 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-09 00:12:17.187704 | orchestrator | 2026-03-09 00:12:17.187724 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-09 00:12:17.187737 | orchestrator | 2026-03-09 00:12:17.187751 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:12:18.636558 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:18.636647 | orchestrator | 2026-03-09 00:12:18.636666 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-09 00:12:18.690301 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:18.690411 | 
orchestrator | 2026-03-09 00:12:18.690428 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-09 00:12:18.763519 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:18.763606 | orchestrator | 2026-03-09 00:12:18.763623 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-09 00:12:19.601574 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:19.601697 | orchestrator | 2026-03-09 00:12:19.601715 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-09 00:12:20.346133 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:20.346183 | orchestrator | 2026-03-09 00:12:20.346192 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-09 00:12:21.786326 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-09 00:12:21.786428 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-09 00:12:21.786446 | orchestrator | 2026-03-09 00:12:21.786476 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-09 00:12:23.239563 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:23.239620 | orchestrator | 2026-03-09 00:12:23.239629 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-09 00:12:25.021397 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:12:25.021453 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-09 00:12:25.021461 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:12:25.021468 | orchestrator | 2026-03-09 00:12:25.021476 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-09 00:12:25.085581 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 00:12:25.085624 | orchestrator | 2026-03-09 00:12:25.085633 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-09 00:12:25.160885 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:25.160981 | orchestrator | 2026-03-09 00:12:25.161009 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-09 00:12:25.756686 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:25.756763 | orchestrator | 2026-03-09 00:12:25.756779 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-09 00:12:25.832933 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:25.832971 | orchestrator | 2026-03-09 00:12:25.832977 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-09 00:12:26.737619 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:12:26.737875 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:26.737902 | orchestrator | 2026-03-09 00:12:26.737921 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-09 00:12:26.777557 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:26.777608 | orchestrator | 2026-03-09 00:12:26.777616 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-09 00:12:26.799034 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:26.799105 | orchestrator | 2026-03-09 00:12:26.799117 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-09 00:12:26.836357 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:26.836441 | orchestrator | 2026-03-09 00:12:26.836450 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-09 00:12:26.904047 | 
orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:26.904100 | orchestrator | 2026-03-09 00:12:26.904108 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-09 00:12:27.645764 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:27.647061 | orchestrator | 2026-03-09 00:12:27.647097 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-09 00:12:27.647110 | orchestrator | 2026-03-09 00:12:27.647122 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:12:29.124457 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:29.124522 | orchestrator | 2026-03-09 00:12:29.124530 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-09 00:12:30.094629 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:30.094701 | orchestrator | 2026-03-09 00:12:30.094719 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:12:30.094730 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-09 00:12:30.094739 | orchestrator | 2026-03-09 00:12:30.270384 | orchestrator | ok: Runtime: 0:07:09.910738 2026-03-09 00:12:30.283420 | 2026-03-09 00:12:30.283571 | TASK [Point out that logging in on the manager is now possible] 2026-03-09 00:12:30.332986 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-09 00:12:30.345107 | 2026-03-09 00:12:30.345301 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-09 00:12:30.395107 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-09 00:12:30.405219 | 2026-03-09 00:12:30.405409 | TASK [Run manager part 1 + 2] 2026-03-09 00:12:31.263694 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-09 00:12:31.322296 | orchestrator | 2026-03-09 00:12:31.322368 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-09 00:12:31.322407 | orchestrator | 2026-03-09 00:12:31.322440 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:12:34.376774 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:34.376834 | orchestrator | 2026-03-09 00:12:34.376868 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-09 00:12:34.418748 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:34.418819 | orchestrator | 2026-03-09 00:12:34.418840 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-09 00:12:34.457682 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:34.457753 | orchestrator | 2026-03-09 00:12:34.457775 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-09 00:12:34.492750 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:34.492810 | orchestrator | 2026-03-09 00:12:34.492826 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-09 00:12:34.560203 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:34.560279 | orchestrator | 2026-03-09 00:12:34.560298 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-09 00:12:34.643901 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:34.643988 | orchestrator | 2026-03-09 00:12:34.644015 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-09 00:12:34.701738 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-09 00:12:34.701793 | orchestrator | 2026-03-09 00:12:34.701802 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-09 00:12:35.418848 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:35.418899 | orchestrator | 2026-03-09 00:12:35.418910 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-09 00:12:35.461338 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:35.461418 | orchestrator | 2026-03-09 00:12:35.461433 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-09 00:12:36.822102 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:36.822148 | orchestrator | 2026-03-09 00:12:36.822156 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-09 00:12:37.370645 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:37.370684 | orchestrator | 2026-03-09 00:12:37.370691 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-09 00:12:38.492510 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:38.492569 | orchestrator | 2026-03-09 00:12:38.492584 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-09 00:12:54.198535 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:54.198591 | orchestrator | 2026-03-09 00:12:54.198604 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-09 00:12:54.896509 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:54.896592 | orchestrator | 2026-03-09 00:12:54.896611 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-09 00:12:54.958134 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:54.958284 | orchestrator | 2026-03-09 00:12:54.958301 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-09 00:12:55.987489 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:55.987650 | orchestrator | 2026-03-09 00:12:55.987667 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-09 00:12:56.971765 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:56.971830 | orchestrator | 2026-03-09 00:12:56.971840 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-09 00:12:57.575716 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:57.575806 | orchestrator | 2026-03-09 00:12:57.575824 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-09 00:12:57.617245 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-09 00:12:57.617335 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-09 00:12:57.617345 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-09 00:12:57.617353 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-09 00:13:01.390486 | orchestrator | changed: [testbed-manager] 2026-03-09 00:13:01.390559 | orchestrator | 2026-03-09 00:13:01.390576 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-09 00:13:11.123065 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-09 00:13:11.123131 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-09 00:13:11.123141 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-09 00:13:11.123148 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-09 00:13:11.123160 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-09 00:13:11.123167 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-09 00:13:11.123173 | orchestrator | 2026-03-09 00:13:11.123180 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-09 00:13:13.015558 | orchestrator | changed: [testbed-manager] 2026-03-09 00:13:13.015671 | orchestrator | 2026-03-09 00:13:13.015700 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-09 00:13:13.064732 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:13:13.064813 | orchestrator | 2026-03-09 00:13:13.064827 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-09 00:13:16.303611 | orchestrator | changed: [testbed-manager] 2026-03-09 00:13:16.304551 | orchestrator | 2026-03-09 00:13:16.304646 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-09 00:13:16.345926 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:13:16.346006 | orchestrator | 2026-03-09 00:13:16.346054 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-09 00:14:59.636438 | orchestrator | changed: [testbed-manager] 2026-03-09 
00:14:59.636516 | orchestrator | 2026-03-09 00:14:59.636526 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-09 00:15:00.842722 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:00.842772 | orchestrator | 2026-03-09 00:15:00.842782 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:15:00.842790 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-09 00:15:00.842797 | orchestrator | 2026-03-09 00:15:01.042550 | orchestrator | ok: Runtime: 0:02:30.246983 2026-03-09 00:15:01.057328 | 2026-03-09 00:15:01.057465 | TASK [Reboot manager] 2026-03-09 00:15:02.599634 | orchestrator | ok: Runtime: 0:00:01.030208 2026-03-09 00:15:02.620047 | 2026-03-09 00:15:02.620325 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-09 00:15:19.072286 | orchestrator | ok 2026-03-09 00:15:19.085597 | 2026-03-09 00:15:19.085871 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-09 00:16:19.139344 | orchestrator | ok 2026-03-09 00:16:19.153956 | 2026-03-09 00:16:19.154207 | TASK [Deploy manager + bootstrap nodes] 2026-03-09 00:16:22.548613 | orchestrator | 2026-03-09 00:16:22.548813 | orchestrator | # DEPLOY MANAGER 2026-03-09 00:16:22.548839 | orchestrator | 2026-03-09 00:16:22.548854 | orchestrator | + set -e 2026-03-09 00:16:22.548867 | orchestrator | + echo 2026-03-09 00:16:22.548881 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-09 00:16:22.548899 | orchestrator | + echo 2026-03-09 00:16:22.548949 | orchestrator | + cat /opt/manager-vars.sh 2026-03-09 00:16:22.552244 | orchestrator | export NUMBER_OF_NODES=6 2026-03-09 00:16:22.552313 | orchestrator | 2026-03-09 00:16:22.552327 | orchestrator | export CEPH_VERSION=reef 2026-03-09 00:16:22.552338 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-09 00:16:22.552348 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-09 00:16:22.552371 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-03-09 00:16:22.552379 | orchestrator | 2026-03-09 00:16:22.552393 | orchestrator | export ARA=false 2026-03-09 00:16:22.552402 | orchestrator | export DEPLOY_MODE=manager 2026-03-09 00:16:22.552416 | orchestrator | export TEMPEST=true 2026-03-09 00:16:22.552424 | orchestrator | export IS_ZUUL=true 2026-03-09 00:16:22.552433 | orchestrator | 2026-03-09 00:16:22.552446 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2026-03-09 00:16:22.552455 | orchestrator | export EXTERNAL_API=false 2026-03-09 00:16:22.552463 | orchestrator | 2026-03-09 00:16:22.552471 | orchestrator | export IMAGE_USER=ubuntu 2026-03-09 00:16:22.552481 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-09 00:16:22.552489 | orchestrator | 2026-03-09 00:16:22.552497 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-09 00:16:22.552514 | orchestrator | 2026-03-09 00:16:22.552554 | orchestrator | + echo 2026-03-09 00:16:22.552575 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:16:22.554248 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:16:22.554288 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:16:22.554298 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:16:22.554308 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-09 00:16:22.554316 | orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:16:22.554324 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:16:22.554343 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:16:22.554358 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:16:22.554366 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:16:22.554374 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 00:16:22.554383 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:16:22.554397 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-09 00:16:22.554405 | 
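The deploy stage drives everything from a flat file of exported variables (`/opt/manager-vars.sh`, dumped above), and each script re-sources it, which is why the same `++ export …` lines repeat in the trace. A minimal sketch of the pattern under a throwaway path (`/tmp/manager-vars-demo.sh` is mine, not the testbed's):

```shell
#!/usr/bin/env bash
# Write a vars file the way the testbed seeds /opt/manager-vars.sh.
cat > /tmp/manager-vars-demo.sh <<'EOF'
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export OPENSTACK_VERSION=2025.1
EOF

# Each stage re-sources the file, so the settings survive across
# separate scripts without any other state being passed along.
. /tmp/manager-vars-demo.sh
echo "deploying ${NUMBER_OF_NODES} nodes (ceph=${CEPH_VERSION}, openstack=${OPENSTACK_VERSION})"
# → deploying 6 nodes (ceph=reef, openstack=2025.1)
```

Because the values are plain `export` statements, the same file works both as a shell include and as documentation of the job's effective configuration in the log.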
orchestrator | ++ MANAGER_VERSION=latest 2026-03-09 00:16:22.554413 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-09 00:16:22.554433 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-09 00:16:22.554441 | orchestrator | ++ export ARA=false 2026-03-09 00:16:22.554450 | orchestrator | ++ ARA=false 2026-03-09 00:16:22.554458 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:16:22.554466 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:16:22.554474 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:16:22.554482 | orchestrator | ++ TEMPEST=true 2026-03-09 00:16:22.554490 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:16:22.554498 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:16:22.554509 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2026-03-09 00:16:22.554542 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2026-03-09 00:16:22.554551 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:16:22.554559 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:16:22.554567 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:16:22.554575 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:16:22.554583 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:16:22.554591 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:16:22.554599 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:16:22.554607 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:16:22.554615 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-09 00:16:22.612414 | orchestrator | + docker version 2026-03-09 00:16:22.737963 | orchestrator | Client: Docker Engine - Community 2026-03-09 00:16:22.738119 | orchestrator | Version: 27.5.1 2026-03-09 00:16:22.738137 | orchestrator | API version: 1.47 2026-03-09 00:16:22.738152 | orchestrator | Go version: go1.22.11 2026-03-09 00:16:22.738164 | orchestrator | Git commit: 9f9e405 2026-03-09 00:16:22.738176 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-09 00:16:22.738188 | orchestrator | OS/Arch: linux/amd64 2026-03-09 00:16:22.738199 | orchestrator | Context: default 2026-03-09 00:16:22.738220 | orchestrator | 2026-03-09 00:16:22.738232 | orchestrator | Server: Docker Engine - Community 2026-03-09 00:16:22.738243 | orchestrator | Engine: 2026-03-09 00:16:22.738254 | orchestrator | Version: 27.5.1 2026-03-09 00:16:22.738265 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-09 00:16:22.738306 | orchestrator | Go version: go1.22.11 2026-03-09 00:16:22.738318 | orchestrator | Git commit: 4c9b3b0 2026-03-09 00:16:22.738329 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-09 00:16:22.738340 | orchestrator | OS/Arch: linux/amd64 2026-03-09 00:16:22.738350 | orchestrator | Experimental: false 2026-03-09 00:16:22.738361 | orchestrator | containerd: 2026-03-09 00:16:22.738372 | orchestrator | Version: v2.2.1 2026-03-09 00:16:22.738384 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-09 00:16:22.738395 | orchestrator | runc: 2026-03-09 00:16:22.738406 | orchestrator | Version: 1.3.4 2026-03-09 00:16:22.738417 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-09 00:16:22.738428 | orchestrator | docker-init: 2026-03-09 00:16:22.738452 | orchestrator | Version: 0.19.0 2026-03-09 00:16:22.738464 | orchestrator | GitCommit: de40ad0 2026-03-09 00:16:22.740859 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-09 00:16:22.748887 | orchestrator | + set -e 2026-03-09 00:16:22.748914 | orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:16:22.748925 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:16:22.748937 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:16:22.748948 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:16:22.748959 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:16:22.748970 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 
00:16:22.748981 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:16:22.748992 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-09 00:16:22.749003 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-09 00:16:22.749014 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-09 00:16:22.749025 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-09 00:16:22.749036 | orchestrator | ++ export ARA=false 2026-03-09 00:16:22.749046 | orchestrator | ++ ARA=false 2026-03-09 00:16:22.749057 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:16:22.749068 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:16:22.749079 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:16:22.749735 | orchestrator | ++ TEMPEST=true 2026-03-09 00:16:22.749751 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:16:22.749762 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:16:22.749774 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2026-03-09 00:16:22.749785 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2026-03-09 00:16:22.749795 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:16:22.749806 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:16:22.749817 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:16:22.749828 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:16:22.749838 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:16:22.749849 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:16:22.749861 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:16:22.749871 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:16:22.749882 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:16:22.749893 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:16:22.749904 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:16:22.749914 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:16:22.749929 | orchestrator | ++ 
OSISM_APPLY_RETRY=1
2026-03-09 00:16:22.749940 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-09 00:16:22.749951 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-09 00:16:22.749962 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-09 00:16:22.757231 | orchestrator | + set -e
2026-03-09 00:16:22.757293 | orchestrator | + VERSION=reef
2026-03-09 00:16:22.758407 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:16:22.764339 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-09 00:16:22.764382 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:16:22.770318 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1
2026-03-09 00:16:22.777671 | orchestrator | + set -e
2026-03-09 00:16:22.777741 | orchestrator | + VERSION=2025.1
2026-03-09 00:16:22.778679 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:16:22.782372 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-09 00:16:22.782448 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:16:22.787953 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-09 00:16:22.788505 | orchestrator | ++ semver latest 7.0.0
2026-03-09 00:16:22.855768 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-09 00:16:22.855880 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-09 00:16:22.855900 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-09 00:16:22.856597 | orchestrator | ++ semver latest 10.0.0-0
2026-03-09 00:16:22.921505 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-09 00:16:22.922177 | orchestrator | ++ semver 2025.1 2025.1
2026-03-09 00:16:23.009923 | orchestrator | + [[ 0 -ge 0 ]]
2026-03-09 00:16:23.009996 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-03-09 00:16:23.017305 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-03-09 00:16:23.022948 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-09 00:16:23.121486 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:16:23.122838 | orchestrator | + source /opt/venv/bin/activate
2026-03-09 00:16:23.124111 | orchestrator | ++ deactivate nondestructive
2026-03-09 00:16:23.124152 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:16:23.124158 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:16:23.124163 | orchestrator | ++ hash -r
2026-03-09 00:16:23.124167 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:16:23.124172 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-09 00:16:23.124176 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-09 00:16:23.124185 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-09 00:16:23.124190 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-09 00:16:23.124195 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-09 00:16:23.124204 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-09 00:16:23.124209 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-09 00:16:23.124292 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:16:23.124317 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:16:23.124322 | orchestrator | ++ export PATH
2026-03-09 00:16:23.124326 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:16:23.124428 | orchestrator | ++ '[' -z '' ']'
2026-03-09 00:16:23.124434 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-09 00:16:23.124438 | orchestrator | ++ PS1='(venv) '
2026-03-09 00:16:23.124443 | orchestrator | ++ export PS1
2026-03-09 00:16:23.124448 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-09 00:16:23.124452 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-09 00:16:23.124479 | orchestrator | ++ hash -r
2026-03-09 00:16:23.124666 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-09 00:16:24.540983 | orchestrator |
2026-03-09 00:16:24.541101 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-09 00:16:24.541118 | orchestrator |
2026-03-09 00:16:24.541130 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-09 00:16:25.123803 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:25.123912 | orchestrator |
2026-03-09 00:16:25.123930 | orchestrator | TASK [Copy fact files] *********************************************************
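The `set-ceph-version.sh` and `set-openstack-version.sh` calls traced above follow a grep-then-sed pattern: only rewrite a `key: value` line if the key is already present in the configuration file. A minimal sketch of that pattern (illustrative, not the actual script source):

```shell
# set_version KEY VALUE FILE -- pin KEY to VALUE in a YAML-ish config file.
# Mirrors the trace: grep first, then sed -i, and leave the file untouched
# when the key is absent. GNU sed is assumed (in-place -i without a suffix).
set_version() {
    local key="$1" value="$2" file="$3"
    # grep -q exits non-zero when the key is missing
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${value}/g" "$file"
    fi
}
```

Checking presence before substituting means a missing key stays missing instead of silently doing nothing surprising, which is why the trace shows the `[[ -n ... ]]` test on the grep output before each `sed -i`.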
2026-03-09 00:16:26.115822 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:26.115929 | orchestrator |
2026-03-09 00:16:26.115950 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-09 00:16:26.115964 | orchestrator |
2026-03-09 00:16:26.115976 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:16:28.613481 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:28.613613 | orchestrator |
2026-03-09 00:16:28.613635 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-09 00:16:28.674904 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:28.674977 | orchestrator |
2026-03-09 00:16:28.674984 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-09 00:16:29.160930 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:29.161038 | orchestrator |
2026-03-09 00:16:29.161056 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-09 00:16:29.207225 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:16:29.207350 | orchestrator |
2026-03-09 00:16:29.207377 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-09 00:16:29.606368 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:29.606493 | orchestrator |
2026-03-09 00:16:29.606515 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-09 00:16:29.955261 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:29.955388 | orchestrator |
2026-03-09 00:16:29.955416 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-09 00:16:30.068752 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:16:30.068847 | orchestrator |
2026-03-09 00:16:30.068862 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-09 00:16:30.068874 | orchestrator |
2026-03-09 00:16:30.068886 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:16:31.860389 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:31.860495 | orchestrator |
2026-03-09 00:16:31.860511 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-09 00:16:31.989083 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-09 00:16:31.989194 | orchestrator |
2026-03-09 00:16:31.989222 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-09 00:16:32.051819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-09 00:16:32.051901 | orchestrator |
2026-03-09 00:16:32.051917 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-09 00:16:33.192904 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-09 00:16:33.193031 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-09 00:16:33.193048 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-09 00:16:33.193060 | orchestrator |
2026-03-09 00:16:33.193075 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-09 00:16:35.040190 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-09 00:16:35.040281 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-09 00:16:35.040294 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-09 00:16:35.040304 | orchestrator |
2026-03-09 00:16:35.040314 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-09 00:16:35.706003 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:16:35.706175 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:35.706202 | orchestrator |
2026-03-09 00:16:35.706235 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-09 00:16:36.365603 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:16:36.365718 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:36.365744 | orchestrator |
2026-03-09 00:16:36.365764 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-09 00:16:36.425468 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:16:36.425586 | orchestrator |
2026-03-09 00:16:36.425601 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-09 00:16:36.804714 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:36.804845 | orchestrator |
2026-03-09 00:16:36.804875 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-09 00:16:36.880467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-09 00:16:36.880583 | orchestrator |
2026-03-09 00:16:36.880623 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-09 00:16:38.034673 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:38.034752 | orchestrator |
2026-03-09 00:16:38.034765 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-09 00:16:38.867841 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:38.867941 | orchestrator |
2026-03-09 00:16:38.867967 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-09 00:16:51.404126 | orchestrator | changed: [testbed-manager]
2026-03-09 00:16:51.404222 | orchestrator |
2026-03-09 00:16:51.404238 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-09 00:16:51.471620 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:16:51.471708 | orchestrator |
2026-03-09 00:16:51.471782 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-09 00:16:51.471859 | orchestrator |
2026-03-09 00:16:51.471873 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:16:53.372201 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:53.372300 | orchestrator |
2026-03-09 00:16:53.372315 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-09 00:16:53.503901 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-09 00:16:53.503992 | orchestrator |
2026-03-09 00:16:53.504006 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-09 00:16:53.567102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-09 00:16:53.567213 | orchestrator |
2026-03-09 00:16:53.567238 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-09 00:16:56.189923 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:56.190059 | orchestrator |
2026-03-09 00:16:56.190080 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-09 00:16:56.245133 | orchestrator | ok: [testbed-manager]
2026-03-09 00:16:56.245224 | orchestrator |
2026-03-09 00:16:56.245240 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-09 00:16:56.387835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-09 00:16:56.387937 | orchestrator |
2026-03-09 00:16:56.387963 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-09 00:16:59.347844 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-09 00:16:59.347958 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-09 00:16:59.347972 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-09 00:16:59.347982 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-09 00:16:59.347992 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-09 00:16:59.348001 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-09 00:16:59.348010 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-09 00:16:59.348019 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-09 00:16:59.348028 | orchestrator |
2026-03-09 00:16:59.348039 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-09 00:17:00.005179 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:00.005271 | orchestrator |
2026-03-09 00:17:00.005288 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-09 00:17:00.683208 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:00.683326 | orchestrator |
2026-03-09 00:17:00.683344 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-09 00:17:00.762889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-09 00:17:00.762985 | orchestrator |
2026-03-09 00:17:00.763005 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-09 00:17:01.989858 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-09 00:17:01.989973 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-09 00:17:01.989984 | orchestrator |
2026-03-09 00:17:01.989993 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-09 00:17:02.630405 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:02.630504 | orchestrator |
2026-03-09 00:17:02.630521 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-09 00:17:02.684715 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:17:02.684801 | orchestrator |
2026-03-09 00:17:02.684816 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-09 00:17:02.775365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-09 00:17:02.775452 | orchestrator |
2026-03-09 00:17:02.775465 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-09 00:17:03.441007 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:03.441093 | orchestrator |
2026-03-09 00:17:03.441102 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-09 00:17:03.505416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-09 00:17:03.505514 | orchestrator |
2026-03-09 00:17:03.505530 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-09 00:17:04.913259 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:17:04.913347 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:17:04.913362 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:04.913374 | orchestrator |
2026-03-09 00:17:04.913385 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-09 00:17:05.550310 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:05.550783 | orchestrator |
2026-03-09 00:17:05.550851 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-09 00:17:05.608960 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:17:05.609042 | orchestrator |
2026-03-09 00:17:05.609054 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-09 00:17:05.706643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-09 00:17:05.706742 | orchestrator |
2026-03-09 00:17:05.706758 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-09 00:17:06.267778 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:06.267853 | orchestrator |
2026-03-09 00:17:06.267862 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-09 00:17:06.700717 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:06.700775 | orchestrator |
2026-03-09 00:17:06.700790 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-09 00:17:07.998744 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-09 00:17:07.998852 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-09 00:17:07.998869 | orchestrator |
2026-03-09 00:17:07.998882 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-09 00:17:08.665862 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:08.665965 | orchestrator |
2026-03-09 00:17:08.665982 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-09 00:17:09.052425 | orchestrator | ok: [testbed-manager]
2026-03-09 00:17:09.052526 | orchestrator |
2026-03-09 00:17:09.052568 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-09 00:17:09.410473 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:09.410602 | orchestrator |
2026-03-09 00:17:09.410620 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-09 00:17:09.450631 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:17:09.450725 | orchestrator |
2026-03-09 00:17:09.450745 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-09 00:17:09.538851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-09 00:17:09.538940 | orchestrator |
2026-03-09 00:17:09.538953 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-09 00:17:09.592097 | orchestrator | ok: [testbed-manager]
2026-03-09 00:17:09.592188 | orchestrator |
2026-03-09 00:17:09.592205 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-09 00:17:11.608819 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-09 00:17:11.608942 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-09 00:17:11.608972 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-09 00:17:11.608985 | orchestrator |
2026-03-09 00:17:11.608998 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-09 00:17:12.326917 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:12.327018 | orchestrator |
2026-03-09 00:17:12.327036 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-09 00:17:13.033782 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:13.033926 | orchestrator |
2026-03-09 00:17:13.033945 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-09 00:17:13.759487 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:13.759654 | orchestrator |
2026-03-09 00:17:13.759683 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-09 00:17:13.831798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-09 00:17:13.831921 | orchestrator |
2026-03-09 00:17:13.831945 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-09 00:17:13.887180 | orchestrator | ok: [testbed-manager]
2026-03-09 00:17:13.887287 | orchestrator |
2026-03-09 00:17:13.887313 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-09 00:17:14.628735 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-09 00:17:14.628837 | orchestrator |
2026-03-09 00:17:14.628853 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-09 00:17:14.712674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-09 00:17:14.712791 | orchestrator |
2026-03-09 00:17:14.712809 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-09 00:17:15.465945 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:15.466086 | orchestrator |
2026-03-09 00:17:15.466106 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-09 00:17:16.068867 | orchestrator | ok: [testbed-manager]
2026-03-09 00:17:16.068952 | orchestrator |
2026-03-09 00:17:16.068964 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-09 00:17:16.136360 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:17:16.136443 | orchestrator |
2026-03-09 00:17:16.136458 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-09 00:17:16.198364 | orchestrator | ok: [testbed-manager]
2026-03-09 00:17:16.198468 | orchestrator |
2026-03-09 00:17:16.198485 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-09 00:17:17.077429 | orchestrator | changed: [testbed-manager]
2026-03-09 00:17:17.077533 | orchestrator |
2026-03-09 00:17:17.077578 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-09 00:18:29.856043 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:29.856157 | orchestrator |
2026-03-09 00:18:29.856176 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-09 00:18:31.876191 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:31.876294 | orchestrator |
2026-03-09 00:18:31.876311 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-09 00:18:31.936372 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:18:31.936465 | orchestrator |
2026-03-09 00:18:31.936502 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-09 00:18:34.442261 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:34.442408 | orchestrator |
2026-03-09 00:18:34.442440 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-09 00:18:34.556078 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:34.556212 | orchestrator |
2026-03-09 00:18:34.556241 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-09 00:18:34.556263 | orchestrator |
2026-03-09 00:18:34.556282 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-09 00:18:34.604848 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:18:34.604944 | orchestrator |
2026-03-09 00:18:34.604959 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-09 00:19:34.664853 | orchestrator | Pausing for 60 seconds
2026-03-09 00:19:34.664963 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:34.664976 | orchestrator |
2026-03-09 00:19:34.664987 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-09 00:19:38.261534 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:38.261678 | orchestrator |
2026-03-09 00:19:38.261697 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-09 00:20:40.262423 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-09 00:20:40.262557 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-09 00:20:40.262573 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
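The "Wait for an healthy manager service" handler above is an Ansible task with a retry budget (50 retries shown); the FAILED - RETRYING lines are expected while the containers come up. A rough shell analogue of that retry-until-success pattern (the real handler inspects container health; the generic check command here is an assumption for illustration):

```shell
# wait_until RETRIES DELAY CMD [ARGS...] -- re-run CMD until it exits 0
# or the retry budget is exhausted; returns 1 on timeout. Mimics the
# Ansible retries/delay/until semantics seen in the handler output.
wait_until() {
    local retries="$1" delay="$2"
    shift 2
    local i
    for ((i = 0; i < retries; i++)); do
        if "$@"; then
            return 0
        fi
        echo "FAILED - RETRYING: $* ($((retries - i - 1)) retries left)." >&2
        sleep "$delay"
    done
    return 1
}
```

In this run the check failed three times (50, 49, 48 retries left) and then succeeded, well within the budget.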
2026-03-09 00:20:40.262586 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:40.262599 | orchestrator |
2026-03-09 00:20:40.262611 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-09 00:20:51.539193 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:51.539319 | orchestrator |
2026-03-09 00:20:51.539338 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-09 00:20:51.622063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-09 00:20:51.622168 | orchestrator |
2026-03-09 00:20:51.622185 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-09 00:20:51.622198 | orchestrator |
2026-03-09 00:20:51.622209 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-09 00:20:51.679946 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:51.680042 | orchestrator |
2026-03-09 00:20:51.680058 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-09 00:20:51.759171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-09 00:20:51.759246 | orchestrator |
2026-03-09 00:20:51.759254 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-09 00:20:52.617779 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:52.617885 | orchestrator |
2026-03-09 00:20:52.617902 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-09 00:20:55.939272 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:55.939397 | orchestrator |
2026-03-09 00:20:55.939418 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-09 00:20:56.009719 | orchestrator | ok: [testbed-manager] => {
2026-03-09 00:20:56.009816 | orchestrator | "version_check_result.stdout_lines": [
2026-03-09 00:20:56.009828 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-09 00:20:56.009836 | orchestrator | "Checking running containers against expected versions...",
2026-03-09 00:20:56.009844 | orchestrator | "",
2026-03-09 00:20:56.009852 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-09 00:20:56.009860 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-09 00:20:56.009868 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.009875 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-09 00:20:56.009882 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.009889 | orchestrator | "",
2026-03-09 00:20:56.009896 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-09 00:20:56.009903 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-03-09 00:20:56.009910 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.009917 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-03-09 00:20:56.009924 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.009931 | orchestrator | "",
2026-03-09 00:20:56.009938 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-09 00:20:56.009944 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-09 00:20:56.009950 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.009956 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-09 00:20:56.009962 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.009968 | orchestrator | "",
2026-03-09 00:20:56.009975 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-09 00:20:56.009982 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-09 00:20:56.009988 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010057 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-09 00:20:56.010067 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010073 | orchestrator | "",
2026-03-09 00:20:56.010080 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-09 00:20:56.010086 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-03-09 00:20:56.010092 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010098 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-03-09 00:20:56.010104 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010111 | orchestrator | "",
2026-03-09 00:20:56.010117 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-09 00:20:56.010124 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010130 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010137 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010144 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010151 | orchestrator | "",
2026-03-09 00:20:56.010158 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-09 00:20:56.010165 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-09 00:20:56.010172 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010178 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-09 00:20:56.010184 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010191 | orchestrator | "",
2026-03-09 00:20:56.010198 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-09 00:20:56.010214 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-09 00:20:56.010222 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010229 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-09 00:20:56.010235 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010246 | orchestrator | "",
2026-03-09 00:20:56.010253 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-09 00:20:56.010261 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-09 00:20:56.010268 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010274 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-09 00:20:56.010280 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010287 | orchestrator | "",
2026-03-09 00:20:56.010294 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-09 00:20:56.010301 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-09 00:20:56.010309 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010317 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-09 00:20:56.010324 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010330 | orchestrator | "",
2026-03-09 00:20:56.010337 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-09 00:20:56.010343 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010350 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010357 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010363 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010370 | orchestrator | "",
2026-03-09 00:20:56.010376 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-09 00:20:56.010383 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010390 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010396 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010403 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010410 | orchestrator | "",
2026-03-09 00:20:56.010416 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-09 00:20:56.010424 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010430 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010437 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010443 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010459 | orchestrator | "",
2026-03-09 00:20:56.010466 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-09 00:20:56.010472 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010479 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010485 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010493 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010500 | orchestrator | "",
2026-03-09 00:20:56.010507 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-09 00:20:56.010531 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010538 | orchestrator | " Enabled: true",
2026-03-09 00:20:56.010545 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:20:56.010551 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:20:56.010558 | orchestrator | "",
2026-03-09 00:20:56.010565 | orchestrator | "=== Summary ===",
2026-03-09 00:20:56.010571 | orchestrator | "Errors (version mismatches): 0",
2026-03-09 00:20:56.010577 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-09 00:20:56.010584 | orchestrator | "",
2026-03-09 00:20:56.010591 | orchestrator | "✅ All running containers match expected versions!"
2026-03-09 00:20:56.010598 | orchestrator | ]
2026-03-09 00:20:56.010605 | orchestrator | }
2026-03-09 00:20:56.010611 | orchestrator |
2026-03-09 00:20:56.010618 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-09 00:20:56.073592 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:56.073734 | orchestrator |
2026-03-09 00:20:56.073753 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:20:56.073769 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-09 00:20:56.073781 | orchestrator |
2026-03-09 00:20:56.192422 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:20:56.192488 | orchestrator | + deactivate
2026-03-09 00:20:56.192494 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-09 00:20:56.192500 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:20:56.192504 | orchestrator | + export PATH
2026-03-09 00:20:56.192508 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-09 00:20:56.192513 | orchestrator | + '[' -n '' ']'
2026-03-09 00:20:56.192517 | orchestrator | + hash -r
2026-03-09 00:20:56.192522 | orchestrator | + '[' -n '' ']'
2026-03-09 00:20:56.192526 | orchestrator | + unset VIRTUAL_ENV
2026-03-09 00:20:56.192529 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-09 00:20:56.192534 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-03-09 00:20:56.192537 | orchestrator | + unset -f deactivate 2026-03-09 00:20:56.192542 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-09 00:20:56.199127 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-09 00:20:56.199167 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-09 00:20:56.199172 | orchestrator | + local max_attempts=60 2026-03-09 00:20:56.199177 | orchestrator | + local name=ceph-ansible 2026-03-09 00:20:56.199181 | orchestrator | + local attempt_num=1 2026-03-09 00:20:56.199715 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:20:56.238958 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:20:56.239079 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-09 00:20:56.239095 | orchestrator | + local max_attempts=60 2026-03-09 00:20:56.239108 | orchestrator | + local name=kolla-ansible 2026-03-09 00:20:56.239120 | orchestrator | + local attempt_num=1 2026-03-09 00:20:56.239525 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-09 00:20:56.270790 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:20:56.270897 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-09 00:20:56.270919 | orchestrator | + local max_attempts=60 2026-03-09 00:20:56.270937 | orchestrator | + local name=osism-ansible 2026-03-09 00:20:56.270954 | orchestrator | + local attempt_num=1 2026-03-09 00:20:56.270971 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-09 00:20:56.304598 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:20:56.304751 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-09 00:20:56.304793 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-09 00:20:56.977073 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-09 00:20:57.163143 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-09 00:20:57.163242 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-09 00:20:57.163257 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-09 00:20:57.163269 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-09 00:20:57.163283 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-09 00:20:57.163297 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-09 00:20:57.163315 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-09 00:20:57.163362 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-09 00:20:57.163382 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-09 00:20:57.163408 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-09 00:20:57.163432 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-09 00:20:57.163451 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-09 00:20:57.163469 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-09 00:20:57.163487 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-09 00:20:57.163507 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-09 00:20:57.163527 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-09 00:20:57.169295 | orchestrator | ++ semver latest 7.0.0 2026-03-09 00:20:57.223240 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:20:57.223337 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-09 00:20:57.223355 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-09 00:20:57.226915 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-09 00:21:09.379432 | orchestrator | 2026-03-09 00:21:09 | INFO  | Prepare task for execution of resolvconf. 2026-03-09 00:21:09.588827 | orchestrator | 2026-03-09 00:21:09 | INFO  | Task d1994e45-89e7-4c10-8a21-4eb9b187d11b (resolvconf) was prepared for execution. 2026-03-09 00:21:09.588922 | orchestrator | 2026-03-09 00:21:09 | INFO  | It takes a moment until task d1994e45-89e7-4c10-8a21-4eb9b187d11b (resolvconf) has been started and output is visible here. 
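The bash trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` for each manager container. A minimal sketch of such a helper, reconstructed from the traced variable names (`max_attempts`, `name`, `attempt_num`); the poll interval is an assumption, as the actual testbed script is not shown here:

```shell
# Sketch of the health-wait helper traced in the log above.
# WAIT_INTERVAL is an assumed knob, not part of the original script.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if (( attempt_num++ >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep "${WAIT_INTERVAL:-5}"
    done
}
```

In the log all three containers (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) were already `healthy` on the first probe, so each call returned immediately.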
2026-03-09 00:21:23.231027 | orchestrator | 2026-03-09 00:21:23.231129 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-09 00:21:23.231146 | orchestrator | 2026-03-09 00:21:23.231158 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:21:23.231169 | orchestrator | Monday 09 March 2026 00:21:13 +0000 (0:00:00.150) 0:00:00.150 ********** 2026-03-09 00:21:23.231180 | orchestrator | ok: [testbed-manager] 2026-03-09 00:21:23.231192 | orchestrator | 2026-03-09 00:21:23.231203 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-09 00:21:23.231214 | orchestrator | Monday 09 March 2026 00:21:17 +0000 (0:00:03.905) 0:00:04.055 ********** 2026-03-09 00:21:23.231225 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:23.231236 | orchestrator | 2026-03-09 00:21:23.231247 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-09 00:21:23.231258 | orchestrator | Monday 09 March 2026 00:21:17 +0000 (0:00:00.068) 0:00:04.124 ********** 2026-03-09 00:21:23.231269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-09 00:21:23.231280 | orchestrator | 2026-03-09 00:21:23.231291 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-09 00:21:23.231312 | orchestrator | Monday 09 March 2026 00:21:17 +0000 (0:00:00.071) 0:00:04.196 ********** 2026-03-09 00:21:23.231323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:21:23.231335 | orchestrator | 2026-03-09 00:21:23.231345 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-09 00:21:23.231356 | orchestrator | Monday 09 March 2026 00:21:18 +0000 (0:00:00.078) 0:00:04.274 ********** 2026-03-09 00:21:23.231367 | orchestrator | ok: [testbed-manager] 2026-03-09 00:21:23.231378 | orchestrator | 2026-03-09 00:21:23.231389 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-09 00:21:23.231400 | orchestrator | Monday 09 March 2026 00:21:18 +0000 (0:00:00.957) 0:00:05.231 ********** 2026-03-09 00:21:23.231410 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:23.231421 | orchestrator | 2026-03-09 00:21:23.231432 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-09 00:21:23.231443 | orchestrator | Monday 09 March 2026 00:21:19 +0000 (0:00:00.043) 0:00:05.274 ********** 2026-03-09 00:21:23.231453 | orchestrator | ok: [testbed-manager] 2026-03-09 00:21:23.231464 | orchestrator | 2026-03-09 00:21:23.231475 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-09 00:21:23.231485 | orchestrator | Monday 09 March 2026 00:21:19 +0000 (0:00:00.473) 0:00:05.748 ********** 2026-03-09 00:21:23.231496 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:23.231507 | orchestrator | 2026-03-09 00:21:23.231518 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-09 00:21:23.231529 | orchestrator | Monday 09 March 2026 00:21:19 +0000 (0:00:00.073) 0:00:05.822 ********** 2026-03-09 00:21:23.231540 | orchestrator | changed: [testbed-manager] 2026-03-09 00:21:23.231551 | orchestrator | 2026-03-09 00:21:23.231561 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-09 00:21:23.231573 | orchestrator | Monday 09 March 2026 00:21:20 +0000 (0:00:00.486) 0:00:06.308 ********** 2026-03-09 00:21:23.231586 | orchestrator | changed: 
[testbed-manager] 2026-03-09 00:21:23.231598 | orchestrator | 2026-03-09 00:21:23.231635 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-09 00:21:23.231671 | orchestrator | Monday 09 March 2026 00:21:21 +0000 (0:00:00.992) 0:00:07.301 ********** 2026-03-09 00:21:23.231685 | orchestrator | ok: [testbed-manager] 2026-03-09 00:21:23.231696 | orchestrator | 2026-03-09 00:21:23.231707 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-09 00:21:23.231717 | orchestrator | Monday 09 March 2026 00:21:21 +0000 (0:00:00.881) 0:00:08.183 ********** 2026-03-09 00:21:23.231728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-09 00:21:23.231739 | orchestrator | 2026-03-09 00:21:23.231750 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-09 00:21:23.231760 | orchestrator | Monday 09 March 2026 00:21:22 +0000 (0:00:00.101) 0:00:08.284 ********** 2026-03-09 00:21:23.231774 | orchestrator | changed: [testbed-manager] 2026-03-09 00:21:23.231792 | orchestrator | 2026-03-09 00:21:23.231811 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:21:23.231830 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:21:23.231849 | orchestrator | 2026-03-09 00:21:23.231860 | orchestrator | 2026-03-09 00:21:23.231871 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:21:23.231882 | orchestrator | Monday 09 March 2026 00:21:23 +0000 (0:00:01.059) 0:00:09.344 ********** 2026-03-09 00:21:23.231892 | orchestrator | =============================================================================== 2026-03-09 00:21:23.231903 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.91s 2026-03-09 00:21:23.231914 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.06s 2026-03-09 00:21:23.231924 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.99s 2026-03-09 00:21:23.231935 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.96s 2026-03-09 00:21:23.231946 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.88s 2026-03-09 00:21:23.231956 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.49s 2026-03-09 00:21:23.231984 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2026-03-09 00:21:23.231995 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-03-09 00:21:23.232006 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-09 00:21:23.232017 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-09 00:21:23.232028 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-03-09 00:21:23.232045 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-09 00:21:23.232056 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s 2026-03-09 00:21:23.433094 | orchestrator | + osism apply sshconfig 2026-03-09 00:21:35.338579 | orchestrator | 2026-03-09 00:21:35 | INFO  | Prepare task for execution of sshconfig. 2026-03-09 00:21:35.420873 | orchestrator | 2026-03-09 00:21:35 | INFO  | Task c524b1f0-cc66-4f14-b0ee-64bb10682a5b (sshconfig) was prepared for execution. 
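The resolvconf play above links `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf` and restarts `systemd-resolved`. A rough shell equivalent of the link step, parameterized on a root prefix so it can be exercised without touching a real system (the prefix argument is purely for illustration; the role itself operates on `/`):

```shell
# Sketch of the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" task from the osism.commons.resolvconf role.
# The root-prefix parameter is an assumption for safe illustration.
setup_stub_resolv() {
    local root="${1:-}"
    mkdir -p "$root/etc"
    # Point /etc/resolv.conf at systemd-resolved's stub resolver file.
    ln -sfn /run/systemd/resolve/stub-resolv.conf "$root/etc/resolv.conf"
    # On a real host the role then starts and restarts the service:
    #   systemctl enable --now systemd-resolved
    #   systemctl restart systemd-resolved
}
```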
2026-03-09 00:21:35.420988 | orchestrator | 2026-03-09 00:21:35 | INFO  | It takes a moment until task c524b1f0-cc66-4f14-b0ee-64bb10682a5b (sshconfig) has been started and output is visible here. 2026-03-09 00:21:47.483043 | orchestrator | 2026-03-09 00:21:47.483151 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-09 00:21:47.483167 | orchestrator | 2026-03-09 00:21:47.483179 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-09 00:21:47.483191 | orchestrator | Monday 09 March 2026 00:21:39 +0000 (0:00:00.167) 0:00:00.167 ********** 2026-03-09 00:21:47.483229 | orchestrator | ok: [testbed-manager] 2026-03-09 00:21:47.483242 | orchestrator | 2026-03-09 00:21:47.483253 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-09 00:21:47.483264 | orchestrator | Monday 09 March 2026 00:21:40 +0000 (0:00:00.521) 0:00:00.688 ********** 2026-03-09 00:21:47.483275 | orchestrator | changed: [testbed-manager] 2026-03-09 00:21:47.483286 | orchestrator | 2026-03-09 00:21:47.483297 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-09 00:21:47.483308 | orchestrator | Monday 09 March 2026 00:21:40 +0000 (0:00:00.501) 0:00:01.190 ********** 2026-03-09 00:21:47.483318 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-09 00:21:47.483330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-09 00:21:47.483340 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-09 00:21:47.483351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-09 00:21:47.483361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-09 00:21:47.483372 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-09 00:21:47.483382 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-09 00:21:47.483393 | orchestrator | 2026-03-09 00:21:47.483404 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-09 00:21:47.483415 | orchestrator | Monday 09 March 2026 00:21:46 +0000 (0:00:05.839) 0:00:07.029 ********** 2026-03-09 00:21:47.483425 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:47.483436 | orchestrator | 2026-03-09 00:21:47.483447 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-09 00:21:47.483458 | orchestrator | Monday 09 March 2026 00:21:46 +0000 (0:00:00.084) 0:00:07.114 ********** 2026-03-09 00:21:47.483469 | orchestrator | changed: [testbed-manager] 2026-03-09 00:21:47.483479 | orchestrator | 2026-03-09 00:21:47.483490 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:21:47.483502 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:21:47.483514 | orchestrator | 2026-03-09 00:21:47.483525 | orchestrator | 2026-03-09 00:21:47.483535 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:21:47.483546 | orchestrator | Monday 09 March 2026 00:21:47 +0000 (0:00:00.568) 0:00:07.683 ********** 2026-03-09 00:21:47.483557 | orchestrator | =============================================================================== 2026-03-09 00:21:47.483568 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.84s 2026-03-09 00:21:47.483578 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2026-03-09 00:21:47.483591 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.52s 2026-03-09 00:21:47.483604 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.50s 2026-03-09 00:21:47.483617 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-09 00:21:47.797385 | orchestrator | + osism apply known-hosts 2026-03-09 00:21:59.907833 | orchestrator | 2026-03-09 00:21:59 | INFO  | Prepare task for execution of known-hosts. 2026-03-09 00:21:59.985307 | orchestrator | 2026-03-09 00:21:59 | INFO  | Task e677a625-6db4-48ae-bcab-6db2fdb9cedd (known-hosts) was prepared for execution. 2026-03-09 00:21:59.985403 | orchestrator | 2026-03-09 00:21:59 | INFO  | It takes a moment until task e677a625-6db4-48ae-bcab-6db2fdb9cedd (known-hosts) has been started and output is visible here. 2026-03-09 00:22:16.263660 | orchestrator | 2026-03-09 00:22:16.263832 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-09 00:22:16.263853 | orchestrator | 2026-03-09 00:22:16.263865 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-09 00:22:16.263899 | orchestrator | Monday 09 March 2026 00:22:04 +0000 (0:00:00.167) 0:00:00.167 ********** 2026-03-09 00:22:16.263911 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-09 00:22:16.263923 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-09 00:22:16.263933 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-09 00:22:16.263945 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-09 00:22:16.263955 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-09 00:22:16.263966 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-09 00:22:16.263986 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-09 00:22:16.263998 | orchestrator | 2026-03-09 00:22:16.264010 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-09 
00:22:16.264022 | orchestrator | Monday 09 March 2026 00:22:10 +0000 (0:00:06.084) 0:00:06.252 ********** 2026-03-09 00:22:16.264035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-09 00:22:16.264048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-09 00:22:16.264059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-09 00:22:16.264070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-09 00:22:16.264080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-09 00:22:16.264091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-09 00:22:16.264102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-09 00:22:16.264113 | orchestrator | 2026-03-09 00:22:16.264124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:16.264135 | orchestrator | Monday 09 March 2026 00:22:10 +0000 (0:00:00.174) 0:00:06.426 ********** 2026-03-09 00:22:16.264146 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6Sc8+v+JcJNzMVm5lA9AtohvIpUbjdsrEOJ0Ci5a+p) 2026-03-09 00:22:16.264162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7iaw4G2oGOh9PfI3Z1apR7yy0VXEVBI6tcLwxHkUSl9Xqq5SAZFBE0Dq/VXyTKFspRMZy3hg4/4TjlvVWn/uM4GKbBguVsGT+MNnMm/RvGppS/FVEOmu/o077N1CEaa1OItQ4Y0C9gjb76hdoNTbwrIM41zp09NGsd9cvk/C1i5q8US6ndJlde481/5BcFy2ukkBKrXg+oOVJkYuGa8Mxjzina0O0418ddjU15KKaXKBpXK+5qhRRnqVIle9MlbGiTB5TKnpcHmZ/xyjCOLPBZglbrABqKs7PDwexbhK10AK+ZNZCKDPDY9OBE/+yUpvIULuOfhqLavLGu8O5dhdNsNR4yFG2A8apiZxsRvboV9KNiRDkC7j2AMQXdsrs6NlUP0hd7XmXgIzNFSU2JOcIs0qfHuXDwQtDU/3x1ZSYulehrxwje4UqW5H7ay044t9vN63QqTX+8yofSEZbrkT44kBScdJk+fa+Lm21XOsucRiGS9YaU4Lu6zE0YFG00VE=) 2026-03-09 00:22:16.264177 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM55ciHxsOsfYakncK8lt5O2lYRHMghcNk8Y6dpTDHsLGqdNxAbxJyYGNElcmcVPcqAxjVl6eOCuRztlT0ULt+s=) 2026-03-09 00:22:16.264191 | orchestrator | 2026-03-09 00:22:16.264202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:16.264213 | orchestrator | Monday 09 March 2026 00:22:11 +0000 (0:00:01.189) 0:00:07.615 ********** 2026-03-09 00:22:16.264254 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5PfM0jCQi7plCN3rb0KEZzJdlv3lSrhLaJuQrFAPKzLDDHS4ilW6n2tF/XgYBHllGVT4sl5bOeSNK0RffrdVfso2id4hLVc1gHH5TJGpFvvkEr81J1jSqk125ABiaR+DsrpqBk+X5T3/vAVaaqLu2qguRYfLN7yPFSc6wKXimGOCq1vLtQWC3Bk5qSMC8FQFRT2DHFk93cxhIe4LiNSc9qxKsNMlP1PJaUIwubjka+Ul+q9DMst8ziy6Txs/UjhazN78FZbrZLrX3JvTQY5aIjN4Cx8K6GxjO5vTAO3fVBhy5QuBGjJ8nrr1UlMJVctQPGBW+7j1S1ONqAJh3pR2WUyPQu4Zp2OoFxYkBDJAOr6LV8v34hBC9TLMSRCHD3+5vLxSnFacTqhD8dPOEf2u8QLwrmZQz5xXcBaMTqW8GUkw8ZwzklIoYmkYMl6StqmKZq1Bqew5cYxYto1UZi5eVxU4SoSZTdEgN78DvpdA1BCM++O5kJ1hByJSYK2IQey0=) 2026-03-09 
00:22:16.264268 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDahO141qtdT2zvhjE9SsJQ+TpdHdOUNwgoeoHKpMyrlN8XJiK0qY/D/UyICsbJqijMi3HI9cmN1d/Xg0kblG00=) 2026-03-09 00:22:16.264279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEJX1XocvZloOL7rYrZ9LX+2bgqWeJaH3bQmYoPPqjv) 2026-03-09 00:22:16.264290 | orchestrator | 2026-03-09 00:22:16.264301 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:16.264312 | orchestrator | Monday 09 March 2026 00:22:12 +0000 (0:00:01.065) 0:00:08.681 ********** 2026-03-09 00:22:16.264382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtLyumXHFmKnnYhm8OZYXjq4opPs5avI2wK40aNTU/FZYr/Ekityj2y44JVCqNj1pih7BGs1v/75Ky4932rrRjzjVbdlpFctEOdZgNkCUNaUWt23o+bcBPqwzhmuDIhUBEg0nXg9XxJl27Io2WaXttGbrwpAjKdEvRMAQVIuRNCIUUi2iIUovwtS4zY1281rOPTuiFKoZgTWg5FKTDhUKsAz1NSiC6eVapoQat0v/hiC04JWRxgtY10Jicf3B65pTrmKRlR7q07h2awTe2mDGaKIwVjhy003cNPMcYDYpAVHYaNsgWtA5sAC1JDth+GBwHuaEwN9BYrEv3P/YHuCMUhCymMnuUm6xFfaqp+caWqN/fuLAehLbN+Zmq5e3UCXUrK6gqfBx+ldbY4axI3HhdQH97qYm3eEg61W0ASx4j4T46rnvHP3jX0Klm/w5GzJOXm0FazA+MIE0VMUvoN/CLBG1LcbZbzWEXxkHGDxse6USZg8YokCLVIDRvWYumInE=) 2026-03-09 00:22:16.264394 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLfUTiHh73mRVBnWc11pxfh5sdNihsAJdFzjXg8W+Ffp6YoKs0QvURd+jvOU5+1itIEWB7v5TVk/ok9AsywrlnE=) 2026-03-09 00:22:16.264405 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1D1vgbXWZPfy3xnDFaLslZJGtrY4ietVqAy4asgnih) 2026-03-09 00:22:16.264416 | orchestrator | 2026-03-09 00:22:16.264427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:16.264438 | 
orchestrator | Monday 09 March 2026 00:22:13 +0000 (0:00:01.062) 0:00:09.744 ********** 2026-03-09 00:22:16.264449 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe+L8hEdA8OOHEVPFvoYZkLdEAg0kpqsO8YAMj8EawVYXiCAOgn4hatqSgVu1+E/W0jXBE+VaBvafJe0CMf9b+GNb5RapyWm70OaFqE19tz4EBzHlYUzfnkW0Ai3vD+LbtZ8s70Uw3AohZ6G+0a3r5f5a23R9oqJG6r0g5sY3cKqSLWAfYtUK4uuFF9Nk2NrvThP+KiEtSn6wXooORWNdBk1Sfozr5F71WrnrijO6NlefE1THYVCV4FT9QS7LQQt7XeJYp14RFIcX8GxJE9bBcfgbp5yBsozecREIT29FbP968xoDz7pSmxVEBP1xyr1trnhPbG1R5LZZoMnlfr3GICRIi05ua5lc9XnH7FRWrTrtVJkQuQ00IO8R2GZZlVfno01kuGW294Hg4XYavhI8AUmujcDiOXt4Tj4FHTFshtih0YiUBG/2U4TWO+lt9boJhXceNEVB82c3xxmE5TB0anuwgW+Qh/wfSdwlzoAMQ95RQfyVB58aOoACuH0WOYgc=) 2026-03-09 00:22:16.264461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAWamUs0tOfuq5LgSxMFL8m3BuLzy7vlPTv40rLqvjAPHGY3ZYRhHnYlL+1HCtUYqITQa602FVGxaepKek/QezI=) 2026-03-09 00:22:16.264472 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGMUJ4y2yudSfyyvO1SoE3FDXvw+oOpa92WqPUbEYz3Z) 2026-03-09 00:22:16.264483 | orchestrator | 2026-03-09 00:22:16.264494 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:16.264505 | orchestrator | Monday 09 March 2026 00:22:14 +0000 (0:00:01.073) 0:00:10.817 ********** 2026-03-09 00:22:16.264516 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGTDXAzSPDYIihc0x3DgolHA4aoKiK16BhyhklT5oT1SIkrAtwBK9e+3RctmMXQFiCbwadvFGLvOUx5ShDyXCW15QwbjfLVbadPl6su0JX+D18CL6TFkZtxz65zxbfTc+uA1D67EqHkFnU5ySwtwRxjcVGahLBiuz1+yIzBhwuF6J2ggfdrBwB1HUJYdBR+NvqoSBf0t3MrfscNTX4Novk/cDejiIACYTsWQstxZpKCN8uLfT1qgHDjzMxTquybceObE4LHYuRv1RClpYpeKNpBZYJ1aIvtlSH+ra1m1ynnorl3ksozWhAR1l/lImDBSaLitP+iQLNnEZI51sNGFiH0bvYFgxzMxXp2AwgFs3w2gQGxbQDXQcHoRXRHuWQIG2ymHb4G9KYCkCynrJQHbql3bD1XhhdlpZXPDpN/1+Q5o/IqyCggHrgpvgRiLpo+uv3NtcCucbf53zKqc3dCDPKb7h2gNl4fINCHOPPpP1ur12QimFVn7Q+yNnQzxPBZXs=) 2026-03-09 00:22:16.264534 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBARQ3Mld+uQKLqu9Msw8+q1a/vhEQqdv8jJ3riY0nsNkCen3HR3ei4Y4ufFPEHfJGXTxJJeIEroIDckW0IVus/c=) 2026-03-09 00:22:16.264546 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFONyN8xhmQyHID+dQNMCxU0kp+8mxOIL2T0B4fr6pOo) 2026-03-09 00:22:16.264557 | orchestrator | 2026-03-09 00:22:16.264567 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:16.264578 | orchestrator | Monday 09 March 2026 00:22:15 +0000 (0:00:01.063) 0:00:11.880 ********** 2026-03-09 00:22:16.264595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEAwgFqFEne4JrJSLn2YDKoqaOdV+frk0PjK+eVnd79q) 2026-03-09 00:22:27.154966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyWuNcHvhkdV6gXrlxSOfAq2+SqDhjRo33ZXUiyJ+Te3kSCPpiVQxOfSRyNrvGmtwSc+QKWU6PZ7BBaOPv6O1Cj20biuJggI4NVc/qc5D+aREFYzxPbf1C56rAC1O5ZT7je0Nad1Nnj3OAoTitHd3BeSk120pF4trFWkCj45DpYAqeW71KjnXtJw/8QEiuYFdRa+2Cihyqf+0Fb+DGTkhG9Y2pptVuv+yLK8RPpZw4Z4899WI3ywPy8ny26+IQ/dIJ7bGeETt44Zm/IXDFH3T2BdXCmrdH+CMd1ikH0RCt495E8eXphn54EGURv5qm+c3rVYK27RKrZjUINhLPW6W+hL41ElLImB+aZDGzeUuIG0Ikke/Q4vb3uQaAUIlAx4ptX2Ez6h9Z2sacMQJI/drMFcjek5Mdc+69jx07MkAMHEkGMhR09/gQdXQE7WqY0ITbb6OEOt53oAiJKrW0GEgWWFEql8Ax1aLNrrki1BZHoG0AhrZxjIvThQrIf92ysxc=) 2026-03-09 00:22:27.155081 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIp6xFDVzcyA+//S64g7Dv8qTwnqsHp4lJQRM8u5i793s2oSNcpX5hlVGr7xSpYQSFPF+bzkTCG4WbDsSA0FJiA=) 2026-03-09 00:22:27.155098 | orchestrator | 2026-03-09 00:22:27.155110 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:27.155121 | orchestrator | Monday 09 March 2026 00:22:16 +0000 (0:00:01.093) 0:00:12.974 ********** 2026-03-09 00:22:27.155130 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO4tpxqpoEbPGXoMpw5fssAOLoHcCv8QJj6abuntECoSGytuOd4eMeI+/eOX7B/EEbBDc7FfqdYe25buW6sxfVg=) 2026-03-09 00:22:27.155141 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsr6HyY4jgTQTm5cgpj9F1RQQ7eL7BnKaIdGe7HFcrMpuEYwffnj3MWRsoH6sl2HRZNquy9Rsg5Sw6t6Be+nx5rmbI2F2wQX+0OeR84ut0NLwaTb7T6do6mkcCoZ7cnWy7cVa1h6DXDA+QabgZ6nS6KuIoimCIrrpU6xqL1DVptuVC5kam0CieBjKrAHD+ErhFGZYHFPcqaMjYVkCKjGyk4W3nn1/Ya2moRGMwZCpHosETSXiKYKFCjtjxDuWmu5S5hKOSZ38nz3WIAeHZKTM896ikj6wib7aClf9Zurm2bYXrKo8z5HvK9hxfBBBhTeI0ZIHHR6QrnZjOT6NiRCjqMb0MHBHSkCwcyuyEWFBbj0sdBDVlClB3mTMpy+H103g/YDNWUMiv6FHzzteVyQgunziX4jvtRaWaZrRnZ3N+vKELR9PKnV8Rs+ToatFXqHNAatMdBjBBh57rZBsnmgUKB0dHEUWywTJkVIPIO00CxGIGC7H2GknU3091+vmV/pM=) 
2026-03-09 00:22:27.155152 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1q3tGd8j2DXqrL1i+FB1PdoVTZTbaThgNIFeanKAd1) 2026-03-09 00:22:27.155163 | orchestrator | 2026-03-09 00:22:27.155173 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-09 00:22:27.155201 | orchestrator | Monday 09 March 2026 00:22:18 +0000 (0:00:01.082) 0:00:14.056 ********** 2026-03-09 00:22:27.155212 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-09 00:22:27.155222 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-09 00:22:27.155232 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-09 00:22:27.155264 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-09 00:22:27.155274 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-09 00:22:27.155283 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-09 00:22:27.155293 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-09 00:22:27.155302 | orchestrator | 2026-03-09 00:22:27.155312 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-09 00:22:27.155322 | orchestrator | Monday 09 March 2026 00:22:23 +0000 (0:00:05.207) 0:00:19.263 ********** 2026-03-09 00:22:27.155333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-09 00:22:27.155344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-09 00:22:27.155354 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-09 00:22:27.155363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-09 00:22:27.155373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-09 00:22:27.155382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-09 00:22:27.155408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-09 00:22:27.155418 | orchestrator | 2026-03-09 00:22:27.155428 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:27.155438 | orchestrator | Monday 09 March 2026 00:22:23 +0000 (0:00:00.175) 0:00:19.439 ********** 2026-03-09 00:22:27.155448 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7iaw4G2oGOh9PfI3Z1apR7yy0VXEVBI6tcLwxHkUSl9Xqq5SAZFBE0Dq/VXyTKFspRMZy3hg4/4TjlvVWn/uM4GKbBguVsGT+MNnMm/RvGppS/FVEOmu/o077N1CEaa1OItQ4Y0C9gjb76hdoNTbwrIM41zp09NGsd9cvk/C1i5q8US6ndJlde481/5BcFy2ukkBKrXg+oOVJkYuGa8Mxjzina0O0418ddjU15KKaXKBpXK+5qhRRnqVIle9MlbGiTB5TKnpcHmZ/xyjCOLPBZglbrABqKs7PDwexbhK10AK+ZNZCKDPDY9OBE/+yUpvIULuOfhqLavLGu8O5dhdNsNR4yFG2A8apiZxsRvboV9KNiRDkC7j2AMQXdsrs6NlUP0hd7XmXgIzNFSU2JOcIs0qfHuXDwQtDU/3x1ZSYulehrxwje4UqW5H7ay044t9vN63QqTX+8yofSEZbrkT44kBScdJk+fa+Lm21XOsucRiGS9YaU4Lu6zE0YFG00VE=) 2026-03-09 00:22:27.155458 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM55ciHxsOsfYakncK8lt5O2lYRHMghcNk8Y6dpTDHsLGqdNxAbxJyYGNElcmcVPcqAxjVl6eOCuRztlT0ULt+s=) 2026-03-09 00:22:27.155468 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6Sc8+v+JcJNzMVm5lA9AtohvIpUbjdsrEOJ0Ci5a+p) 2026-03-09 00:22:27.155477 | orchestrator | 2026-03-09 00:22:27.155487 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:27.155497 | orchestrator | Monday 09 March 2026 00:22:24 +0000 (0:00:00.993) 0:00:20.432 ********** 2026-03-09 00:22:27.155508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5PfM0jCQi7plCN3rb0KEZzJdlv3lSrhLaJuQrFAPKzLDDHS4ilW6n2tF/XgYBHllGVT4sl5bOeSNK0RffrdVfso2id4hLVc1gHH5TJGpFvvkEr81J1jSqk125ABiaR+DsrpqBk+X5T3/vAVaaqLu2qguRYfLN7yPFSc6wKXimGOCq1vLtQWC3Bk5qSMC8FQFRT2DHFk93cxhIe4LiNSc9qxKsNMlP1PJaUIwubjka+Ul+q9DMst8ziy6Txs/UjhazN78FZbrZLrX3JvTQY5aIjN4Cx8K6GxjO5vTAO3fVBhy5QuBGjJ8nrr1UlMJVctQPGBW+7j1S1ONqAJh3pR2WUyPQu4Zp2OoFxYkBDJAOr6LV8v34hBC9TLMSRCHD3+5vLxSnFacTqhD8dPOEf2u8QLwrmZQz5xXcBaMTqW8GUkw8ZwzklIoYmkYMl6StqmKZq1Bqew5cYxYto1UZi5eVxU4SoSZTdEgN78DvpdA1BCM++O5kJ1hByJSYK2IQey0=) 2026-03-09 00:22:27.155527 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDahO141qtdT2zvhjE9SsJQ+TpdHdOUNwgoeoHKpMyrlN8XJiK0qY/D/UyICsbJqijMi3HI9cmN1d/Xg0kblG00=) 2026-03-09 00:22:27.155539 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEJX1XocvZloOL7rYrZ9LX+2bgqWeJaH3bQmYoPPqjv) 2026-03-09 00:22:27.155550 | orchestrator | 2026-03-09 00:22:27.155562 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:27.155574 | orchestrator | Monday 09 March 2026 00:22:25 +0000 (0:00:00.939) 0:00:21.372 ********** 2026-03-09 00:22:27.155586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtLyumXHFmKnnYhm8OZYXjq4opPs5avI2wK40aNTU/FZYr/Ekityj2y44JVCqNj1pih7BGs1v/75Ky4932rrRjzjVbdlpFctEOdZgNkCUNaUWt23o+bcBPqwzhmuDIhUBEg0nXg9XxJl27Io2WaXttGbrwpAjKdEvRMAQVIuRNCIUUi2iIUovwtS4zY1281rOPTuiFKoZgTWg5FKTDhUKsAz1NSiC6eVapoQat0v/hiC04JWRxgtY10Jicf3B65pTrmKRlR7q07h2awTe2mDGaKIwVjhy003cNPMcYDYpAVHYaNsgWtA5sAC1JDth+GBwHuaEwN9BYrEv3P/YHuCMUhCymMnuUm6xFfaqp+caWqN/fuLAehLbN+Zmq5e3UCXUrK6gqfBx+ldbY4axI3HhdQH97qYm3eEg61W0ASx4j4T46rnvHP3jX0Klm/w5GzJOXm0FazA+MIE0VMUvoN/CLBG1LcbZbzWEXxkHGDxse6USZg8YokCLVIDRvWYumInE=) 2026-03-09 00:22:27.155596 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLfUTiHh73mRVBnWc11pxfh5sdNihsAJdFzjXg8W+Ffp6YoKs0QvURd+jvOU5+1itIEWB7v5TVk/ok9AsywrlnE=) 2026-03-09 00:22:27.155606 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1D1vgbXWZPfy3xnDFaLslZJGtrY4ietVqAy4asgnih) 2026-03-09 00:22:27.155616 | orchestrator | 2026-03-09 00:22:27.155625 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:27.155635 | orchestrator | Monday 09 March 2026 00:22:26 +0000 (0:00:01.088) 0:00:22.461 
********** 2026-03-09 00:22:27.155650 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAWamUs0tOfuq5LgSxMFL8m3BuLzy7vlPTv40rLqvjAPHGY3ZYRhHnYlL+1HCtUYqITQa602FVGxaepKek/QezI=) 2026-03-09 00:22:27.155740 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe+L8hEdA8OOHEVPFvoYZkLdEAg0kpqsO8YAMj8EawVYXiCAOgn4hatqSgVu1+E/W0jXBE+VaBvafJe0CMf9b+GNb5RapyWm70OaFqE19tz4EBzHlYUzfnkW0Ai3vD+LbtZ8s70Uw3AohZ6G+0a3r5f5a23R9oqJG6r0g5sY3cKqSLWAfYtUK4uuFF9Nk2NrvThP+KiEtSn6wXooORWNdBk1Sfozr5F71WrnrijO6NlefE1THYVCV4FT9QS7LQQt7XeJYp14RFIcX8GxJE9bBcfgbp5yBsozecREIT29FbP968xoDz7pSmxVEBP1xyr1trnhPbG1R5LZZoMnlfr3GICRIi05ua5lc9XnH7FRWrTrtVJkQuQ00IO8R2GZZlVfno01kuGW294Hg4XYavhI8AUmujcDiOXt4Tj4FHTFshtih0YiUBG/2U4TWO+lt9boJhXceNEVB82c3xxmE5TB0anuwgW+Qh/wfSdwlzoAMQ95RQfyVB58aOoACuH0WOYgc=) 2026-03-09 00:22:32.053282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGMUJ4y2yudSfyyvO1SoE3FDXvw+oOpa92WqPUbEYz3Z) 2026-03-09 00:22:32.053384 | orchestrator | 2026-03-09 00:22:32.053400 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:32.053412 | orchestrator | Monday 09 March 2026 00:22:27 +0000 (0:00:01.046) 0:00:23.507 ********** 2026-03-09 00:22:32.053443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGTDXAzSPDYIihc0x3DgolHA4aoKiK16BhyhklT5oT1SIkrAtwBK9e+3RctmMXQFiCbwadvFGLvOUx5ShDyXCW15QwbjfLVbadPl6su0JX+D18CL6TFkZtxz65zxbfTc+uA1D67EqHkFnU5ySwtwRxjcVGahLBiuz1+yIzBhwuF6J2ggfdrBwB1HUJYdBR+NvqoSBf0t3MrfscNTX4Novk/cDejiIACYTsWQstxZpKCN8uLfT1qgHDjzMxTquybceObE4LHYuRv1RClpYpeKNpBZYJ1aIvtlSH+ra1m1ynnorl3ksozWhAR1l/lImDBSaLitP+iQLNnEZI51sNGFiH0bvYFgxzMxXp2AwgFs3w2gQGxbQDXQcHoRXRHuWQIG2ymHb4G9KYCkCynrJQHbql3bD1XhhdlpZXPDpN/1+Q5o/IqyCggHrgpvgRiLpo+uv3NtcCucbf53zKqc3dCDPKb7h2gNl4fINCHOPPpP1ur12QimFVn7Q+yNnQzxPBZXs=) 2026-03-09 00:22:32.053482 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBARQ3Mld+uQKLqu9Msw8+q1a/vhEQqdv8jJ3riY0nsNkCen3HR3ei4Y4ufFPEHfJGXTxJJeIEroIDckW0IVus/c=) 2026-03-09 00:22:32.053496 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFONyN8xhmQyHID+dQNMCxU0kp+8mxOIL2T0B4fr6pOo) 2026-03-09 00:22:32.053507 | orchestrator | 2026-03-09 00:22:32.053518 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:32.053529 | orchestrator | Monday 09 March 2026 00:22:28 +0000 (0:00:01.137) 0:00:24.644 ********** 2026-03-09 00:22:32.053540 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEAwgFqFEne4JrJSLn2YDKoqaOdV+frk0PjK+eVnd79q) 2026-03-09 00:22:32.053552 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyWuNcHvhkdV6gXrlxSOfAq2+SqDhjRo33ZXUiyJ+Te3kSCPpiVQxOfSRyNrvGmtwSc+QKWU6PZ7BBaOPv6O1Cj20biuJggI4NVc/qc5D+aREFYzxPbf1C56rAC1O5ZT7je0Nad1Nnj3OAoTitHd3BeSk120pF4trFWkCj45DpYAqeW71KjnXtJw/8QEiuYFdRa+2Cihyqf+0Fb+DGTkhG9Y2pptVuv+yLK8RPpZw4Z4899WI3ywPy8ny26+IQ/dIJ7bGeETt44Zm/IXDFH3T2BdXCmrdH+CMd1ikH0RCt495E8eXphn54EGURv5qm+c3rVYK27RKrZjUINhLPW6W+hL41ElLImB+aZDGzeUuIG0Ikke/Q4vb3uQaAUIlAx4ptX2Ez6h9Z2sacMQJI/drMFcjek5Mdc+69jx07MkAMHEkGMhR09/gQdXQE7WqY0ITbb6OEOt53oAiJKrW0GEgWWFEql8Ax1aLNrrki1BZHoG0AhrZxjIvThQrIf92ysxc=) 2026-03-09 00:22:32.053564 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIp6xFDVzcyA+//S64g7Dv8qTwnqsHp4lJQRM8u5i793s2oSNcpX5hlVGr7xSpYQSFPF+bzkTCG4WbDsSA0FJiA=) 2026-03-09 00:22:32.053575 | orchestrator | 2026-03-09 00:22:32.053586 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:22:32.053597 | orchestrator | Monday 09 March 2026 00:22:29 +0000 (0:00:01.100) 0:00:25.744 ********** 2026-03-09 00:22:32.053608 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO4tpxqpoEbPGXoMpw5fssAOLoHcCv8QJj6abuntECoSGytuOd4eMeI+/eOX7B/EEbBDc7FfqdYe25buW6sxfVg=) 2026-03-09 00:22:32.053619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsr6HyY4jgTQTm5cgpj9F1RQQ7eL7BnKaIdGe7HFcrMpuEYwffnj3MWRsoH6sl2HRZNquy9Rsg5Sw6t6Be+nx5rmbI2F2wQX+0OeR84ut0NLwaTb7T6do6mkcCoZ7cnWy7cVa1h6DXDA+QabgZ6nS6KuIoimCIrrpU6xqL1DVptuVC5kam0CieBjKrAHD+ErhFGZYHFPcqaMjYVkCKjGyk4W3nn1/Ya2moRGMwZCpHosETSXiKYKFCjtjxDuWmu5S5hKOSZ38nz3WIAeHZKTM896ikj6wib7aClf9Zurm2bYXrKo8z5HvK9hxfBBBhTeI0ZIHHR6QrnZjOT6NiRCjqMb0MHBHSkCwcyuyEWFBbj0sdBDVlClB3mTMpy+H103g/YDNWUMiv6FHzzteVyQgunziX4jvtRaWaZrRnZ3N+vKELR9PKnV8Rs+ToatFXqHNAatMdBjBBh57rZBsnmgUKB0dHEUWywTJkVIPIO00CxGIGC7H2GknU3091+vmV/pM=) 
2026-03-09 00:22:32.053631 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1q3tGd8j2DXqrL1i+FB1PdoVTZTbaThgNIFeanKAd1) 2026-03-09 00:22:32.053642 | orchestrator | 2026-03-09 00:22:32.053653 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-09 00:22:32.053664 | orchestrator | Monday 09 March 2026 00:22:30 +0000 (0:00:01.115) 0:00:26.860 ********** 2026-03-09 00:22:32.053721 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-09 00:22:32.053734 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-09 00:22:32.053745 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-09 00:22:32.053756 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 00:22:32.053767 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-09 00:22:32.053794 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-09 00:22:32.053806 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-09 00:22:32.053826 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:22:32.053839 | orchestrator | 2026-03-09 00:22:32.053852 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-09 00:22:32.053864 | orchestrator | Monday 09 March 2026 00:22:31 +0000 (0:00:00.173) 0:00:27.034 ********** 2026-03-09 00:22:32.053877 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:22:32.053890 | orchestrator | 2026-03-09 00:22:32.053903 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-09 00:22:32.053916 | orchestrator | Monday 09 March 2026 00:22:31 +0000 (0:00:00.051) 0:00:27.085 ********** 2026-03-09 00:22:32.053928 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:22:32.053941 | orchestrator | 2026-03-09 
00:22:32.053954 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-09 00:22:32.053967 | orchestrator | Monday 09 March 2026 00:22:31 +0000 (0:00:00.042) 0:00:27.128 ********** 2026-03-09 00:22:32.053981 | orchestrator | changed: [testbed-manager] 2026-03-09 00:22:32.053994 | orchestrator | 2026-03-09 00:22:32.054007 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:22:32.054076 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:22:32.054091 | orchestrator | 2026-03-09 00:22:32.054105 | orchestrator | 2026-03-09 00:22:32.054117 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:22:32.054131 | orchestrator | Monday 09 March 2026 00:22:31 +0000 (0:00:00.745) 0:00:27.873 ********** 2026-03-09 00:22:32.054144 | orchestrator | =============================================================================== 2026-03-09 00:22:32.054157 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.09s 2026-03-09 00:22:32.054170 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2026-03-09 00:22:32.054182 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-03-09 00:22:32.054193 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-09 00:22:32.054204 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-09 00:22:32.054215 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-09 00:22:32.054234 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-09 00:22:32.054245 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-09 00:22:32.054256 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-09 00:22:32.054267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-09 00:22:32.054278 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-09 00:22:32.054289 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-09 00:22:32.054300 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-09 00:22:32.054311 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-09 00:22:32.054321 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-09 00:22:32.054332 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-03-09 00:22:32.054343 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2026-03-09 00:22:32.054354 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-03-09 00:22:32.054365 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-03-09 00:22:32.054376 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-09 00:22:32.352207 | orchestrator | + osism apply squid 2026-03-09 00:22:44.480286 | orchestrator | 2026-03-09 00:22:44 | INFO  | Prepare task for execution of squid. 2026-03-09 00:22:44.552335 | orchestrator | 2026-03-09 00:22:44 | INFO  | Task ea6ef0ac-741e-4307-9cc6-7b39b29ab0cc (squid) was prepared for execution. 
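The known_hosts tasks above run ssh-keyscan per host and then write one entry per (host, keytype, key) result. A minimal sketch of that write step, assuming illustrative helper names and sample keys (this is not the osism.commons role code):

```python
import os
import tempfile

# Sketch of the "Write scanned known_hosts entries" step: turn
# ssh-keyscan-style (host, keytype, key) tuples into known_hosts lines.
# format_known_hosts and the sample keys are illustrative assumptions.

def format_known_hosts(scanned):
    """scanned: iterable of (host, keytype, base64_key) tuples."""
    return ["{} {} {}".format(host, keytype, key) for host, keytype, key in scanned]

entries = [
    ("testbed-node-0", "ssh-ed25519", "AAAAC3Nza...example"),
    ("192.168.16.10", "ssh-ed25519", "AAAAC3Nza...example"),
]
lines = format_known_hosts(entries)

# Write the formatted entries to a known_hosts file, one per line.
path = os.path.join(tempfile.mkdtemp(), "known_hosts")
with open(path, "w") as fh:
    fh.write("\n".join(lines) + "\n")
```

Note that the role scans each host twice in the log — once by hostname (e.g. `testbed-node-0`) and once by `ansible_host` IP (e.g. `192.168.16.10`) — so the same key material appears under both names.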
2026-03-09 00:22:44.552428 | orchestrator | 2026-03-09 00:22:44 | INFO  | It takes a moment until task ea6ef0ac-741e-4307-9cc6-7b39b29ab0cc (squid) has been started and output is visible here. 2026-03-09 00:24:38.537417 | orchestrator | 2026-03-09 00:24:38.537544 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-09 00:24:38.537572 | orchestrator | 2026-03-09 00:24:38.537595 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-09 00:24:38.537615 | orchestrator | Monday 09 March 2026 00:22:48 +0000 (0:00:00.167) 0:00:00.167 ********** 2026-03-09 00:24:38.537636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:24:38.537650 | orchestrator | 2026-03-09 00:24:38.537661 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-09 00:24:38.537672 | orchestrator | Monday 09 March 2026 00:22:48 +0000 (0:00:00.093) 0:00:00.261 ********** 2026-03-09 00:24:38.537683 | orchestrator | ok: [testbed-manager] 2026-03-09 00:24:38.537695 | orchestrator | 2026-03-09 00:24:38.537706 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-09 00:24:38.537717 | orchestrator | Monday 09 March 2026 00:22:50 +0000 (0:00:01.550) 0:00:01.811 ********** 2026-03-09 00:24:38.537804 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-09 00:24:38.537815 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-09 00:24:38.537826 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-09 00:24:38.537837 | orchestrator | 2026-03-09 00:24:38.537848 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-09 00:24:38.537859 | orchestrator | Monday 09 
March 2026 00:22:51 +0000 (0:00:01.186) 0:00:02.998 ********** 2026-03-09 00:24:38.537870 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-09 00:24:38.537881 | orchestrator | 2026-03-09 00:24:38.537892 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-09 00:24:38.537903 | orchestrator | Monday 09 March 2026 00:22:52 +0000 (0:00:01.084) 0:00:04.083 ********** 2026-03-09 00:24:38.537914 | orchestrator | ok: [testbed-manager] 2026-03-09 00:24:38.537924 | orchestrator | 2026-03-09 00:24:38.537953 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-09 00:24:38.537967 | orchestrator | Monday 09 March 2026 00:22:53 +0000 (0:00:00.356) 0:00:04.439 ********** 2026-03-09 00:24:38.537980 | orchestrator | changed: [testbed-manager] 2026-03-09 00:24:38.537992 | orchestrator | 2026-03-09 00:24:38.538005 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-09 00:24:38.538078 | orchestrator | Monday 09 March 2026 00:22:53 +0000 (0:00:00.929) 0:00:05.368 ********** 2026-03-09 00:24:38.538093 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-09 00:24:38.538107 | orchestrator | ok: [testbed-manager] 2026-03-09 00:24:38.538119 | orchestrator | 2026-03-09 00:24:38.538131 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-09 00:24:38.538144 | orchestrator | Monday 09 March 2026 00:23:25 +0000 (0:00:31.438) 0:00:36.807 ********** 2026-03-09 00:24:38.538156 | orchestrator | changed: [testbed-manager] 2026-03-09 00:24:38.538169 | orchestrator | 2026-03-09 00:24:38.538182 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-09 00:24:38.538195 | orchestrator | Monday 09 March 2026 00:23:37 +0000 (0:00:12.016) 0:00:48.823 ********** 2026-03-09 00:24:38.538207 | orchestrator | Pausing for 60 seconds 2026-03-09 00:24:38.538222 | orchestrator | changed: [testbed-manager] 2026-03-09 00:24:38.538235 | orchestrator | 2026-03-09 00:24:38.538249 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-09 00:24:38.538288 | orchestrator | Monday 09 March 2026 00:24:37 +0000 (0:01:00.081) 0:01:48.904 ********** 2026-03-09 00:24:38.538301 | orchestrator | ok: [testbed-manager] 2026-03-09 00:24:38.538314 | orchestrator | 2026-03-09 00:24:38.538327 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-09 00:24:38.538341 | orchestrator | Monday 09 March 2026 00:24:37 +0000 (0:00:00.073) 0:01:48.977 ********** 2026-03-09 00:24:38.538353 | orchestrator | changed: [testbed-manager] 2026-03-09 00:24:38.538364 | orchestrator | 2026-03-09 00:24:38.538374 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:24:38.538386 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:24:38.538397 | orchestrator | 2026-03-09 00:24:38.538408 | orchestrator | 2026-03-09 00:24:38.538419 | orchestrator | 
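The squid handlers above show the common retry-until-healthy pattern: "Manage squid service" retries on failure ("10 retries left"), and after the restart a handler polls until the service reports healthy. A minimal sketch of that loop, with a hypothetical `check` callback and shortened timings (not the role's implementation):

```python
import time

# Sketch of the retry-until-healthy pattern seen in the squid tasks:
# poll a check function a bounded number of times with a delay between
# attempts. wait_until, retries, and delay are illustrative assumptions.

def wait_until(check, retries=10, delay=0.01):
    """Call check() up to `retries` times; return True on first success."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

attempts = {"n": 0}

def becomes_healthy():
    # Simulated health probe that succeeds on the third poll.
    attempts["n"] += 1
    return attempts["n"] >= 3

ok = wait_until(becomes_healthy)
```

In the log the real delays are much larger (a fixed 60-second pause plus a health wait), which is why this one role accounts for most of the play's 1:49 runtime.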
TASKS RECAP ******************************************************************** 2026-03-09 00:24:38.538429 | orchestrator | Monday 09 March 2026 00:24:38 +0000 (0:00:00.659) 0:01:49.637 ********** 2026-03-09 00:24:38.538440 | orchestrator | =============================================================================== 2026-03-09 00:24:38.538451 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-09 00:24:38.538461 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.44s 2026-03-09 00:24:38.538472 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.02s 2026-03-09 00:24:38.538482 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.55s 2026-03-09 00:24:38.538493 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2026-03-09 00:24:38.538503 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2026-03-09 00:24:38.538514 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-03-09 00:24:38.538525 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-03-09 00:24:38.538535 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2026-03-09 00:24:38.538546 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-09 00:24:38.538556 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-09 00:24:38.910859 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-09 00:24:38.910956 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-09 00:24:38.916875 | orchestrator | + set -e 2026-03-09 00:24:38.917225 | orchestrator | + NAMESPACE=kolla 2026-03-09 
00:24:38.917252 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-09 00:24:38.921662 | orchestrator | ++ semver latest 9.0.0 2026-03-09 00:24:38.961167 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-09 00:24:38.961261 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-09 00:24:38.961833 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-09 00:24:51.020790 | orchestrator | 2026-03-09 00:24:51 | INFO  | Prepare task for execution of operator. 2026-03-09 00:24:51.097317 | orchestrator | 2026-03-09 00:24:51 | INFO  | Task 6d0ba132-67d3-41f9-8883-28b004a63f4a (operator) was prepared for execution. 2026-03-09 00:24:51.097414 | orchestrator | 2026-03-09 00:24:51 | INFO  | It takes a moment until task 6d0ba132-67d3-41f9-8883-28b004a63f4a (operator) has been started and output is visible here. 2026-03-09 00:25:07.017970 | orchestrator | 2026-03-09 00:25:07.018157 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-09 00:25:07.018176 | orchestrator | 2026-03-09 00:25:07.018189 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:25:07.018201 | orchestrator | Monday 09 March 2026 00:24:55 +0000 (0:00:00.150) 0:00:00.150 ********** 2026-03-09 00:25:07.018213 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:07.018224 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:07.018991 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:07.019044 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:07.019056 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:07.019075 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:07.019092 | orchestrator | 2026-03-09 00:25:07.019111 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-09 00:25:07.019129 | orchestrator | Monday 09 March 2026 00:24:58 
+0000 (0:00:03.255) 0:00:03.406 ********** 2026-03-09 00:25:07.019141 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:07.019151 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:07.019162 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:07.019173 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:07.019184 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:07.019194 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:07.019205 | orchestrator | 2026-03-09 00:25:07.019216 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-09 00:25:07.019227 | orchestrator | 2026-03-09 00:25:07.019254 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-09 00:25:07.019266 | orchestrator | Monday 09 March 2026 00:24:59 +0000 (0:00:00.745) 0:00:04.151 ********** 2026-03-09 00:25:07.019277 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:07.019288 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:07.019298 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:07.019309 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:07.019320 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:07.019330 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:07.019341 | orchestrator | 2026-03-09 00:25:07.019352 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-09 00:25:07.019363 | orchestrator | Monday 09 March 2026 00:24:59 +0000 (0:00:00.170) 0:00:04.322 ********** 2026-03-09 00:25:07.019373 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:07.019384 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:07.019395 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:07.019405 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:07.019416 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:07.019426 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:07.019437 | orchestrator | 
2026-03-09 00:25:07.019448 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-09 00:25:07.019459 | orchestrator | Monday 09 March 2026 00:24:59 +0000 (0:00:00.198) 0:00:04.520 ********** 2026-03-09 00:25:07.019470 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:07.019481 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:07.019492 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:07.019502 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:07.019513 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:07.019524 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:07.019535 | orchestrator | 2026-03-09 00:25:07.019545 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-09 00:25:07.019556 | orchestrator | Monday 09 March 2026 00:25:00 +0000 (0:00:00.573) 0:00:05.094 ********** 2026-03-09 00:25:07.019567 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:07.019578 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:07.019589 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:07.019600 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:07.019610 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:07.019621 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:07.019632 | orchestrator | 2026-03-09 00:25:07.019643 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-09 00:25:07.019654 | orchestrator | Monday 09 March 2026 00:25:01 +0000 (0:00:00.794) 0:00:05.889 ********** 2026-03-09 00:25:07.019664 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-09 00:25:07.019676 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-09 00:25:07.019686 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-09 00:25:07.019697 | orchestrator | changed: [testbed-node-2] => (item=adm) 
2026-03-09 00:25:07.019708 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-09 00:25:07.019764 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-09 00:25:07.019778 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-09 00:25:07.019789 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-09 00:25:07.019800 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-09 00:25:07.019810 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-09 00:25:07.019821 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-09 00:25:07.019832 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-09 00:25:07.019843 | orchestrator |
2026-03-09 00:25:07.019859 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-09 00:25:07.019877 | orchestrator | Monday 09 March 2026 00:25:02 +0000 (0:00:01.175) 0:00:07.065 **********
2026-03-09 00:25:07.019896 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:07.019915 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:25:07.019934 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:25:07.019954 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:07.019965 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:07.019976 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:25:07.019987 | orchestrator |
2026-03-09 00:25:07.019998 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-09 00:25:07.020010 | orchestrator | Monday 09 March 2026 00:25:03 +0000 (0:00:01.202) 0:00:08.267 **********
2026-03-09 00:25:07.020021 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:25:07.020032 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:25:07.020043 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:25:07.020054 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:25:07.020065 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:25:07.020096 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:25:07.020108 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-09 00:25:07.020121 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-09 00:25:07.020140 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-09 00:25:07.020160 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-09 00:25:07.020178 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-09 00:25:07.020196 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-09 00:25:07.020208 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:25:07.020219 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-09 00:25:07.020234 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-09 00:25:07.020260 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-09 00:25:07.020277 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:25:07.020294 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:25:07.020311 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:25:07.020329 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:25:07.020346 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:25:07.020363 | orchestrator |
2026-03-09 00:25:07.020382 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-09 00:25:07.020400 | orchestrator | Monday 09 March 2026 00:25:04 +0000 (0:00:01.154) 0:00:09.421 **********
2026-03-09 00:25:07.020418 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:07.020438 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:07.020456 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:07.020483 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:07.020495 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:07.020505 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:07.020516 | orchestrator |
2026-03-09 00:25:07.020527 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-09 00:25:07.020538 | orchestrator | Monday 09 March 2026 00:25:04 +0000 (0:00:00.203) 0:00:09.596 **********
2026-03-09 00:25:07.020549 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:07.020560 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:07.020570 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:07.020581 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:07.020592 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:07.020602 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:07.020613 | orchestrator |
2026-03-09 00:25:07.020624 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-09 00:25:07.020635 | orchestrator | Monday 09 March 2026 00:25:05 +0000 (0:00:00.203) 0:00:09.800 **********
2026-03-09 00:25:07.020645 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:07.020656 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:25:07.020667 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:25:07.020677 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:25:07.020688 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:07.020699 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:07.020709 | orchestrator |
2026-03-09 00:25:07.020720 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-09 00:25:07.020765 | orchestrator | Monday 09 March 2026 00:25:05 +0000 (0:00:00.640) 0:00:10.440 **********
2026-03-09 00:25:07.020779 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:07.020790 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:07.020801 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:07.020812 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:07.020822 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:07.020833 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:07.020844 | orchestrator |
2026-03-09 00:25:07.020855 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-09 00:25:07.020866 | orchestrator | Monday 09 March 2026 00:25:05 +0000 (0:00:00.210) 0:00:10.651 **********
2026-03-09 00:25:07.020877 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-09 00:25:07.020888 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-09 00:25:07.020898 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:25:07.020909 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:07.020920 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-09 00:25:07.020931 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-09 00:25:07.020942 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:07.020952 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:25:07.020963 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-09 00:25:07.020974 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:07.020984 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-09 00:25:07.020995 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:25:07.021006 | orchestrator |
2026-03-09 00:25:07.021017 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-09 00:25:07.021028 | orchestrator | Monday 09 March 2026 00:25:06 +0000 (0:00:00.726) 0:00:11.377 **********
2026-03-09 00:25:07.021039 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:07.021049 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:07.021060 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:07.021071 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:07.021081 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:07.021092 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:07.021103 | orchestrator |
2026-03-09 00:25:07.021114 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-09 00:25:07.021125 | orchestrator | Monday 09 March 2026 00:25:06 +0000 (0:00:00.197) 0:00:11.574 **********
2026-03-09 00:25:07.021142 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:07.021153 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:07.021164 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:07.021175 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:07.021197 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:08.598072 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:08.598167 | orchestrator |
2026-03-09 00:25:08.598182 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-09 00:25:08.598195 | orchestrator | Monday 09 March 2026 00:25:07 +0000 (0:00:00.218) 0:00:11.793 **********
2026-03-09 00:25:08.598205 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:08.598215 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:08.598224 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:08.598234 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:08.598244 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:08.598253 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:08.598263 | orchestrator |
2026-03-09 00:25:08.598273 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-09 00:25:08.598283 | orchestrator | Monday 09 March 2026 00:25:07 +0000 (0:00:00.207) 0:00:12.000 **********
2026-03-09 00:25:08.598292 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:25:08.598302 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:25:08.598312 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:25:08.598321 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:08.598331 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:08.598341 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:08.598350 | orchestrator |
2026-03-09 00:25:08.598360 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-09 00:25:08.598388 | orchestrator | Monday 09 March 2026 00:25:07 +0000 (0:00:00.709) 0:00:12.710 **********
2026-03-09 00:25:08.598399 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:25:08.598408 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:25:08.598418 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:25:08.598427 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:08.598437 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:08.598446 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:08.598456 | orchestrator |
2026-03-09 00:25:08.598466 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:25:08.598477 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:25:08.598488 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:25:08.598498 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:25:08.598508 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:25:08.598520 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:25:08.598533 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:25:08.598544 | orchestrator |
2026-03-09 00:25:08.598555 | orchestrator |
2026-03-09 00:25:08.598567 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:25:08.598579 | orchestrator | Monday 09 March 2026 00:25:08 +0000 (0:00:00.366) 0:00:13.076 **********
2026-03-09 00:25:08.598590 | orchestrator | ===============================================================================
2026-03-09 00:25:08.598624 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s
2026-03-09 00:25:08.598636 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s
2026-03-09 00:25:08.598648 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2026-03-09 00:25:08.598660 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.15s
2026-03-09 00:25:08.598672 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2026-03-09 00:25:08.598683 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s
2026-03-09 00:25:08.598694 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-03-09 00:25:08.598707 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s
2026-03-09 00:25:08.598718 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.64s
2026-03-09 00:25:08.598758 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.57s
2026-03-09 00:25:08.598770 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.37s
2026-03-09 00:25:08.598781 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.22s
2026-03-09 00:25:08.598792 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s
2026-03-09 00:25:08.598804 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.21s
2026-03-09 00:25:08.598816 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-03-09 00:25:08.598829 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s
2026-03-09 00:25:08.598841 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s
2026-03-09 00:25:08.598852 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-09 00:25:08.598861 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-03-09 00:25:09.230182 | orchestrator | + osism apply --environment custom facts
2026-03-09 00:25:11.312300 | orchestrator | 2026-03-09 00:25:11 | INFO  | Trying to run play facts in environment custom
2026-03-09 00:25:21.410134 | orchestrator | 2026-03-09 00:25:21 | INFO  | Prepare task for execution of facts.
2026-03-09 00:25:21.490188 | orchestrator | 2026-03-09 00:25:21 | INFO  | Task aa758ac2-87c4-4d34-8560-14b768774419 (facts) was prepared for execution.
2026-03-09 00:25:21.490260 | orchestrator | 2026-03-09 00:25:21 | INFO  | It takes a moment until task aa758ac2-87c4-4d34-8560-14b768774419 (facts) has been started and output is visible here.
2026-03-09 00:26:04.168285 | orchestrator |
2026-03-09 00:26:04.168403 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-09 00:26:04.168421 | orchestrator |
2026-03-09 00:26:04.168449 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-09 00:26:04.168462 | orchestrator | Monday 09 March 2026 00:25:25 +0000 (0:00:00.087) 0:00:00.087 **********
2026-03-09 00:26:04.168473 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:04.168485 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:26:04.168497 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:26:04.168508 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:04.168518 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:26:04.168529 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:04.168540 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:04.168551 | orchestrator |
2026-03-09 00:26:04.168562 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-09 00:26:04.168573 | orchestrator | Monday 09 March 2026 00:25:27 +0000 (0:00:01.351) 0:00:01.439 **********
2026-03-09 00:26:04.168583 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:04.168595 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:04.168606 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:04.168640 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:26:04.168652 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:26:04.168662 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:26:04.168673 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:04.168684 | orchestrator |
2026-03-09 00:26:04.168695 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-09 00:26:04.168706 | orchestrator |
2026-03-09 00:26:04.168716 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-09 00:26:04.168727 | orchestrator | Monday 09 March 2026 00:25:28 +0000 (0:00:01.249) 0:00:02.689 **********
2026-03-09 00:26:04.168783 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.168798 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.168809 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.168820 | orchestrator |
2026-03-09 00:26:04.168830 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-09 00:26:04.168842 | orchestrator | Monday 09 March 2026 00:25:28 +0000 (0:00:00.109) 0:00:02.798 **********
2026-03-09 00:26:04.168853 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.168864 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.168875 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.168885 | orchestrator |
2026-03-09 00:26:04.168896 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-09 00:26:04.168907 | orchestrator | Monday 09 March 2026 00:25:28 +0000 (0:00:00.213) 0:00:03.011 **********
2026-03-09 00:26:04.168918 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.168929 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.168940 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.168950 | orchestrator |
2026-03-09 00:26:04.168961 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-09 00:26:04.168973 | orchestrator | Monday 09 March 2026 00:25:28 +0000 (0:00:00.225) 0:00:03.237 **********
2026-03-09 00:26:04.168985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:26:04.168997 | orchestrator |
2026-03-09 00:26:04.169008 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-09 00:26:04.169019 | orchestrator | Monday 09 March 2026 00:25:29 +0000 (0:00:00.135) 0:00:03.373 **********
2026-03-09 00:26:04.169030 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.169041 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.169051 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.169062 | orchestrator |
2026-03-09 00:26:04.169073 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-09 00:26:04.169084 | orchestrator | Monday 09 March 2026 00:25:29 +0000 (0:00:00.413) 0:00:03.786 **********
2026-03-09 00:26:04.169095 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:26:04.169106 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:26:04.169117 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:26:04.169128 | orchestrator |
2026-03-09 00:26:04.169138 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-09 00:26:04.169150 | orchestrator | Monday 09 March 2026 00:25:29 +0000 (0:00:00.156) 0:00:03.942 **********
2026-03-09 00:26:04.169160 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:04.169171 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:04.169182 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:04.169193 | orchestrator |
2026-03-09 00:26:04.169203 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-09 00:26:04.169214 | orchestrator | Monday 09 March 2026 00:25:30 +0000 (0:00:01.055) 0:00:04.998 **********
2026-03-09 00:26:04.169225 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.169236 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.169247 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.169258 | orchestrator |
2026-03-09 00:26:04.169269 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-09 00:26:04.169288 | orchestrator | Monday 09 March 2026 00:25:31 +0000 (0:00:00.475) 0:00:05.473 **********
2026-03-09 00:26:04.169299 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:04.169309 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:04.169320 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:04.169331 | orchestrator |
2026-03-09 00:26:04.169342 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-09 00:26:04.169352 | orchestrator | Monday 09 March 2026 00:25:32 +0000 (0:00:01.092) 0:00:06.566 **********
2026-03-09 00:26:04.169363 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:04.169374 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:04.169384 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:04.169395 | orchestrator |
2026-03-09 00:26:04.169406 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-09 00:26:04.169417 | orchestrator | Monday 09 March 2026 00:25:47 +0000 (0:00:15.655) 0:00:22.221 **********
2026-03-09 00:26:04.169427 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:26:04.169438 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:26:04.169449 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:26:04.169460 | orchestrator |
2026-03-09 00:26:04.169470 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-09 00:26:04.169499 | orchestrator | Monday 09 March 2026 00:25:48 +0000 (0:00:00.098) 0:00:22.319 **********
2026-03-09 00:26:04.169511 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:04.169522 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:04.169533 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:04.169544 | orchestrator |
2026-03-09 00:26:04.169555 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-09 00:26:04.169565 | orchestrator | Monday 09 March 2026 00:25:55 +0000 (0:00:07.379) 0:00:29.698 **********
2026-03-09 00:26:04.169576 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.169587 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.169598 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.169608 | orchestrator |
2026-03-09 00:26:04.169619 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-09 00:26:04.169630 | orchestrator | Monday 09 March 2026 00:25:55 +0000 (0:00:00.453) 0:00:30.152 **********
2026-03-09 00:26:04.169641 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-09 00:26:04.169652 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-09 00:26:04.169663 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-09 00:26:04.169674 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-09 00:26:04.169684 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-09 00:26:04.169695 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-09 00:26:04.169706 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-09 00:26:04.169716 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-09 00:26:04.169727 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-09 00:26:04.169760 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:26:04.169778 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:26:04.169875 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:26:04.169892 | orchestrator |
2026-03-09 00:26:04.169903 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-09 00:26:04.169914 | orchestrator | Monday 09 March 2026 00:25:59 +0000 (0:00:03.241) 0:00:33.393 **********
2026-03-09 00:26:04.169924 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.169935 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.169946 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.169957 | orchestrator |
2026-03-09 00:26:04.169968 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:26:04.169989 | orchestrator |
2026-03-09 00:26:04.170000 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:26:04.170011 | orchestrator | Monday 09 March 2026 00:26:00 +0000 (0:00:01.140) 0:00:34.533 **********
2026-03-09 00:26:04.170076 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:04.170087 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:04.170098 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:04.170108 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:04.170119 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:04.170130 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:04.170140 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:04.170160 | orchestrator |
2026-03-09 00:26:04.170171 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:26:04.170183 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:26:04.170195 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:26:04.170206 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:26:04.170217 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:26:04.170228 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:26:04.170240 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:26:04.170250 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:26:04.170261 | orchestrator |
2026-03-09 00:26:04.170272 | orchestrator |
2026-03-09 00:26:04.170283 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:26:04.170294 | orchestrator | Monday 09 March 2026 00:26:04 +0000 (0:00:03.846) 0:00:38.380 **********
2026-03-09 00:26:04.170305 | orchestrator | ===============================================================================
2026-03-09 00:26:04.170316 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.66s
2026-03-09 00:26:04.170327 | orchestrator | Install required packages (Debian) -------------------------------------- 7.38s
2026-03-09 00:26:04.170337 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.85s
2026-03-09 00:26:04.170348 | orchestrator | Copy fact files --------------------------------------------------------- 3.24s
2026-03-09 00:26:04.170359 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-03-09 00:26:04.170369 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s
2026-03-09 00:26:04.170390 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.14s
2026-03-09 00:26:04.401507 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-03-09 00:26:04.401628 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-03-09 00:26:04.401645 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-03-09 00:26:04.401657 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-03-09 00:26:04.401668 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2026-03-09 00:26:04.401679 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-03-09 00:26:04.401690 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-03-09 00:26:04.401702 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-03-09 00:26:04.401735 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-09 00:26:04.401810 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-03-09 00:26:04.401823 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-09 00:26:04.758935 | orchestrator | + osism apply bootstrap
2026-03-09 00:26:16.982459 | orchestrator | 2026-03-09 00:26:16 | INFO  | Prepare task for execution of bootstrap.
2026-03-09 00:26:17.058598 | orchestrator | 2026-03-09 00:26:17 | INFO  | Task f2b7f90f-8c3b-4ec8-a1c4-9f6553690d76 (bootstrap) was prepared for execution.
2026-03-09 00:26:17.058714 | orchestrator | 2026-03-09 00:26:17 | INFO  | It takes a moment until task f2b7f90f-8c3b-4ec8-a1c4-9f6553690d76 (bootstrap) has been started and output is visible here.
2026-03-09 00:26:33.438374 | orchestrator |
2026-03-09 00:26:33.438509 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-09 00:26:33.438526 | orchestrator |
2026-03-09 00:26:33.438537 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-09 00:26:33.438547 | orchestrator | Monday 09 March 2026 00:26:21 +0000 (0:00:00.156) 0:00:00.156 **********
2026-03-09 00:26:33.438557 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:33.438568 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:33.438578 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:33.438587 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:33.438597 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:33.438606 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:33.438616 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:33.438625 | orchestrator |
2026-03-09 00:26:33.438635 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:26:33.438644 | orchestrator |
2026-03-09 00:26:33.438655 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:26:33.438665 | orchestrator | Monday 09 March 2026 00:26:21 +0000 (0:00:00.256) 0:00:00.413 **********
2026-03-09 00:26:33.438675 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:33.438685 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:33.438694 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:33.438704 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:33.438713 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:33.438723 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:33.438732 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:33.438799 | orchestrator |
2026-03-09 00:26:33.438811 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-09 00:26:33.438821 | orchestrator |
2026-03-09 00:26:33.438830 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:26:33.438840 | orchestrator | Monday 09 March 2026 00:26:25 +0000 (0:00:03.600) 0:00:04.013 **********
2026-03-09 00:26:33.438851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:26:33.438861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:26:33.438870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-09 00:26:33.438880 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-09 00:26:33.438890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:26:33.438901 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-09 00:26:33.438912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-09 00:26:33.438924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:26:33.438935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-09 00:26:33.438946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-09 00:26:33.438957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 00:26:33.438968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-09 00:26:33.438980 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-09 00:26:33.439016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 00:26:33.439027 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-09 00:26:33.439038 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 00:26:33.439049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-09 00:26:33.439062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-09 00:26:33.439074 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-09 00:26:33.439085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-09 00:26:33.439096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-09 00:26:33.439107 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-09 00:26:33.439118 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:26:33.439129 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-09 00:26:33.439140 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 00:26:33.439151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:26:33.439163 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-09 00:26:33.439174 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:26:33.439185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 00:26:33.439196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:26:33.439207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-09 00:26:33.439218 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-09 00:26:33.439229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:26:33.439240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-09 00:26:33.439251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-09 
00:26:33.439261 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:26:33.439271 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-09 00:26:33.439280 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-09 00:26:33.439290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-09 00:26:33.439299 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:26:33.439309 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-09 00:26:33.439319 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-09 00:26:33.439328 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-09 00:26:33.439338 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-09 00:26:33.439347 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-09 00:26:33.439357 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-09 00:26:33.439367 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-09 00:26:33.439393 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 00:26:33.439403 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-09 00:26:33.439413 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:26:33.439422 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-09 00:26:33.439432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-03-09 00:26:33.439441 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:26:33.439451 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-09 00:26:33.439460 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-09 00:26:33.439469 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:33.439479 | orchestrator | 2026-03-09 00:26:33.439488 | orchestrator | 
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-09 00:26:33.439498 | orchestrator | 2026-03-09 00:26:33.439507 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-09 00:26:33.439524 | orchestrator | Monday 09 March 2026 00:26:25 +0000 (0:00:00.545) 0:00:04.559 ********** 2026-03-09 00:26:33.439534 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:33.439543 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:33.439553 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:33.439562 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:33.439572 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:33.439581 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:33.439590 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:33.439600 | orchestrator | 2026-03-09 00:26:33.439609 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-09 00:26:33.439619 | orchestrator | Monday 09 March 2026 00:26:27 +0000 (0:00:01.223) 0:00:05.782 ********** 2026-03-09 00:26:33.439628 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:33.439637 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:33.439647 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:33.439656 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:33.439666 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:33.439675 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:33.439684 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:33.439693 | orchestrator | 2026-03-09 00:26:33.439703 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-09 00:26:33.439712 | orchestrator | Monday 09 March 2026 00:26:28 +0000 (0:00:01.309) 0:00:07.092 ********** 2026-03-09 00:26:33.439723 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:26:33.439735 | orchestrator | 2026-03-09 00:26:33.439765 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-09 00:26:33.439774 | orchestrator | Monday 09 March 2026 00:26:28 +0000 (0:00:00.297) 0:00:07.389 ********** 2026-03-09 00:26:33.439784 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:26:33.439793 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:26:33.439803 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:33.439813 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:33.439822 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:33.439832 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:33.439841 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:33.439851 | orchestrator | 2026-03-09 00:26:33.439878 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-09 00:26:33.439888 | orchestrator | Monday 09 March 2026 00:26:30 +0000 (0:00:02.029) 0:00:09.419 ********** 2026-03-09 00:26:33.439898 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:33.439909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:26:33.439920 | orchestrator | 2026-03-09 00:26:33.439930 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-09 00:26:33.439940 | orchestrator | Monday 09 March 2026 00:26:31 +0000 (0:00:00.291) 0:00:09.710 ********** 2026-03-09 00:26:33.439949 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:26:33.439959 | 
orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:33.439974 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:33.439984 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:26:33.439993 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:33.440003 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:33.440012 | orchestrator | 2026-03-09 00:26:33.440022 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-09 00:26:33.440032 | orchestrator | Monday 09 March 2026 00:26:32 +0000 (0:00:01.066) 0:00:10.776 ********** 2026-03-09 00:26:33.440042 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:33.440052 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:33.440080 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:26:33.440097 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:33.440114 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:33.440130 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:33.440145 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:26:33.440160 | orchestrator | 2026-03-09 00:26:33.440175 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-09 00:26:33.440190 | orchestrator | Monday 09 March 2026 00:26:32 +0000 (0:00:00.566) 0:00:11.343 ********** 2026-03-09 00:26:33.440205 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:26:33.440219 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:26:33.440235 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:26:33.440252 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:26:33.440268 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:26:33.440285 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:26:33.440301 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:33.440317 | orchestrator | 2026-03-09 00:26:33.440334 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-09 00:26:33.440351 | orchestrator | Monday 09 March 2026 00:26:33 +0000 (0:00:00.651) 0:00:11.995 ********** 2026-03-09 00:26:33.440368 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:26:33.440384 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:26:33.440414 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:26:45.712810 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:26:45.712929 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:26:45.712953 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:26:45.712974 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:45.712993 | orchestrator | 2026-03-09 00:26:45.713011 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-09 00:26:45.713024 | orchestrator | Monday 09 March 2026 00:26:33 +0000 (0:00:00.230) 0:00:12.225 ********** 2026-03-09 00:26:45.713037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:26:45.713051 | orchestrator | 2026-03-09 00:26:45.713062 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-09 00:26:45.713074 | orchestrator | Monday 09 March 2026 00:26:33 +0000 (0:00:00.354) 0:00:12.579 ********** 2026-03-09 00:26:45.713085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:26:45.713096 | orchestrator | 2026-03-09 00:26:45.713107 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-09 
00:26:45.713117 | orchestrator | Monday 09 March 2026 00:26:34 +0000 (0:00:00.373) 0:00:12.953 ********** 2026-03-09 00:26:45.713128 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.713140 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.713150 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.713161 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.713171 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.713182 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.713192 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:45.713203 | orchestrator | 2026-03-09 00:26:45.713214 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-09 00:26:45.713225 | orchestrator | Monday 09 March 2026 00:26:35 +0000 (0:00:01.328) 0:00:14.282 ********** 2026-03-09 00:26:45.713236 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:26:45.713247 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:26:45.713258 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:26:45.713268 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:26:45.713279 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:26:45.713318 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:26:45.713331 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:45.713343 | orchestrator | 2026-03-09 00:26:45.713356 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-09 00:26:45.713369 | orchestrator | Monday 09 March 2026 00:26:35 +0000 (0:00:00.256) 0:00:14.539 ********** 2026-03-09 00:26:45.713382 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.713395 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.713407 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.713419 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.713430 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.713440 | orchestrator 
| ok: [testbed-node-2] 2026-03-09 00:26:45.713451 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.713461 | orchestrator | 2026-03-09 00:26:45.713472 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-09 00:26:45.713482 | orchestrator | Monday 09 March 2026 00:26:36 +0000 (0:00:00.551) 0:00:15.090 ********** 2026-03-09 00:26:45.713493 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:26:45.713504 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:26:45.713515 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:26:45.713525 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:26:45.713535 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:26:45.713546 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:26:45.713556 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:45.713567 | orchestrator | 2026-03-09 00:26:45.713578 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-09 00:26:45.713614 | orchestrator | Monday 09 March 2026 00:26:36 +0000 (0:00:00.253) 0:00:15.344 ********** 2026-03-09 00:26:45.713667 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:26:45.713679 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:45.713690 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:26:45.713701 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:45.713723 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:45.713734 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:45.713768 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.713780 | orchestrator | 2026-03-09 00:26:45.713791 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-09 00:26:45.713802 | orchestrator | Monday 09 March 2026 00:26:37 +0000 (0:00:00.541) 0:00:15.885 ********** 2026-03-09 00:26:45.713812 | orchestrator | 
changed: [testbed-node-3] 2026-03-09 00:26:45.713823 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:26:45.713833 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.713844 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:45.713855 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:45.713865 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:45.713876 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:45.713886 | orchestrator | 2026-03-09 00:26:45.713897 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-09 00:26:45.713909 | orchestrator | Monday 09 March 2026 00:26:38 +0000 (0:00:01.101) 0:00:16.987 ********** 2026-03-09 00:26:45.713919 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.713930 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.713941 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.713951 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.713962 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.713980 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:45.713999 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.714096 | orchestrator | 2026-03-09 00:26:45.714116 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-09 00:26:45.714128 | orchestrator | Monday 09 March 2026 00:26:39 +0000 (0:00:01.119) 0:00:18.106 ********** 2026-03-09 00:26:45.714160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:26:45.714183 | orchestrator | 2026-03-09 00:26:45.714195 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-09 00:26:45.714206 | orchestrator | Monday 09 March 2026 
00:26:39 +0000 (0:00:00.344) 0:00:18.451 ********** 2026-03-09 00:26:45.714217 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:45.714228 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:45.714239 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:45.714249 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:45.714260 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:26:45.714271 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:26:45.714281 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:45.714292 | orchestrator | 2026-03-09 00:26:45.714303 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-09 00:26:45.714314 | orchestrator | Monday 09 March 2026 00:26:41 +0000 (0:00:01.377) 0:00:19.828 ********** 2026-03-09 00:26:45.714324 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.714335 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.714346 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.714356 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.714367 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.714378 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:45.714389 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.714399 | orchestrator | 2026-03-09 00:26:45.714410 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-09 00:26:45.714421 | orchestrator | Monday 09 March 2026 00:26:41 +0000 (0:00:00.236) 0:00:20.065 ********** 2026-03-09 00:26:45.714432 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.714443 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.714453 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.714464 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.714474 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.714485 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:45.714495 | 
orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.714506 | orchestrator | 2026-03-09 00:26:45.714517 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-09 00:26:45.714528 | orchestrator | Monday 09 March 2026 00:26:41 +0000 (0:00:00.250) 0:00:20.316 ********** 2026-03-09 00:26:45.714539 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.714549 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.714560 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.714570 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.714581 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.714591 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:45.714608 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.714626 | orchestrator | 2026-03-09 00:26:45.714645 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-09 00:26:45.714665 | orchestrator | Monday 09 March 2026 00:26:41 +0000 (0:00:00.236) 0:00:20.553 ********** 2026-03-09 00:26:45.714685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:26:45.714704 | orchestrator | 2026-03-09 00:26:45.714719 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-09 00:26:45.714730 | orchestrator | Monday 09 March 2026 00:26:42 +0000 (0:00:00.310) 0:00:20.864 ********** 2026-03-09 00:26:45.714741 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.714793 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.714804 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.714815 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.714826 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.714836 | orchestrator | ok: 
[testbed-node-2] 2026-03-09 00:26:45.714847 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.714858 | orchestrator | 2026-03-09 00:26:45.714877 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-09 00:26:45.714888 | orchestrator | Monday 09 March 2026 00:26:42 +0000 (0:00:00.592) 0:00:21.456 ********** 2026-03-09 00:26:45.714899 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:26:45.714910 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:26:45.714921 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:26:45.714932 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:26:45.714943 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:26:45.714954 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:26:45.714965 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:26:45.714975 | orchestrator | 2026-03-09 00:26:45.714987 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-09 00:26:45.714997 | orchestrator | Monday 09 March 2026 00:26:43 +0000 (0:00:00.254) 0:00:21.711 ********** 2026-03-09 00:26:45.715008 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.715019 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.715030 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.715041 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:45.715052 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.715063 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:45.715074 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:45.715085 | orchestrator | 2026-03-09 00:26:45.715095 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-09 00:26:45.715106 | orchestrator | Monday 09 March 2026 00:26:44 +0000 (0:00:01.064) 0:00:22.775 ********** 2026-03-09 00:26:45.715117 | orchestrator | ok: [testbed-node-4] 2026-03-09 
00:26:45.715128 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.715139 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.715150 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:45.715160 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:45.715171 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:45.715182 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:45.715193 | orchestrator | 2026-03-09 00:26:45.715204 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-09 00:26:45.715215 | orchestrator | Monday 09 March 2026 00:26:44 +0000 (0:00:00.627) 0:00:23.403 ********** 2026-03-09 00:26:45.715225 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:45.715236 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:45.715247 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:45.715258 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:45.715277 | orchestrator | ok: [testbed-manager] 2026-03-09 00:27:29.494754 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:29.494924 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:29.494948 | orchestrator | 2026-03-09 00:27:29.494960 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-09 00:27:29.494970 | orchestrator | Monday 09 March 2026 00:26:45 +0000 (0:00:01.256) 0:00:24.659 ********** 2026-03-09 00:27:29.494979 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:29.494988 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:29.494996 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:29.495004 | orchestrator | changed: [testbed-manager] 2026-03-09 00:27:29.495012 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:27:29.495020 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:29.495028 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:29.495037 | orchestrator | 2026-03-09 00:27:29.495045 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-09 00:27:29.495053 | orchestrator | Monday 09 March 2026 00:27:01 +0000 (0:00:15.966) 0:00:40.626 ********** 2026-03-09 00:27:29.495061 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:29.495069 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:29.495077 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:29.495085 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:29.495093 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:27:29.495101 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:27:29.495109 | orchestrator | ok: [testbed-manager] 2026-03-09 00:27:29.495144 | orchestrator | 2026-03-09 00:27:29.495159 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-09 00:27:29.495174 | orchestrator | Monday 09 March 2026 00:27:02 +0000 (0:00:00.235) 0:00:40.862 ********** 2026-03-09 00:27:29.495188 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:29.495202 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:29.495214 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:29.495228 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:29.495236 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:27:29.495244 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:27:29.495251 | orchestrator | ok: [testbed-manager] 2026-03-09 00:27:29.495259 | orchestrator | 2026-03-09 00:27:29.495267 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-09 00:27:29.495280 | orchestrator | Monday 09 March 2026 00:27:02 +0000 (0:00:00.250) 0:00:41.112 ********** 2026-03-09 00:27:29.495294 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:29.495308 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:29.495323 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:29.495338 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:29.495373 | orchestrator | ok: 
[testbed-node-1]
2026-03-09 00:27:29.495383 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.495393 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.495441 | orchestrator |
2026-03-09 00:27:29.495457 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-09 00:27:29.495472 | orchestrator | Monday 09 March 2026 00:27:02 +0000 (0:00:00.240) 0:00:41.352 **********
2026-03-09 00:27:29.495489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:27:29.495506 | orchestrator |
2026-03-09 00:27:29.495520 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-09 00:27:29.495632 | orchestrator | Monday 09 March 2026 00:27:02 +0000 (0:00:00.302) 0:00:41.654 **********
2026-03-09 00:27:29.495651 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.495664 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.495678 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.495686 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.495694 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.495702 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.495710 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.495722 | orchestrator |
2026-03-09 00:27:29.495766 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-09 00:27:29.495821 | orchestrator | Monday 09 March 2026 00:27:04 +0000 (0:00:01.891) 0:00:43.546 **********
2026-03-09 00:27:29.495830 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:27:29.495838 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:27:29.495846 | orchestrator | changed: [testbed-manager]
2026-03-09 00:27:29.495854 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:27:29.495869 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:27:29.495878 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:27:29.495885 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:27:29.495894 | orchestrator |
2026-03-09 00:27:29.495907 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-09 00:27:29.495921 | orchestrator | Monday 09 March 2026 00:27:05 +0000 (0:00:01.118) 0:00:44.665 **********
2026-03-09 00:27:29.495935 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.495949 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.495964 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.495977 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.495990 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.496004 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.496013 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.496027 | orchestrator |
2026-03-09 00:27:29.496041 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-09 00:27:29.496070 | orchestrator | Monday 09 March 2026 00:27:07 +0000 (0:00:01.830) 0:00:46.495 **********
2026-03-09 00:27:29.496086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:27:29.496101 | orchestrator |
2026-03-09 00:27:29.496113 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-09 00:27:29.496126 | orchestrator | Monday 09 March 2026 00:27:08 +0000 (0:00:00.345) 0:00:46.840 **********
2026-03-09 00:27:29.496139 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:27:29.496153 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:27:29.496167 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:27:29.496182 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:27:29.496195 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:27:29.496208 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:27:29.496219 | orchestrator | changed: [testbed-manager]
2026-03-09 00:27:29.496232 | orchestrator |
2026-03-09 00:27:29.496268 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-09 00:27:29.496284 | orchestrator | Monday 09 March 2026 00:27:09 +0000 (0:00:01.088) 0:00:47.929 **********
2026-03-09 00:27:29.496297 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:27:29.496311 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:27:29.496319 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:27:29.496332 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:27:29.496346 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:27:29.496360 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:27:29.496374 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:27:29.496388 | orchestrator |
2026-03-09 00:27:29.496401 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-09 00:27:29.496413 | orchestrator | Monday 09 March 2026 00:27:09 +0000 (0:00:00.255) 0:00:48.184 **********
2026-03-09 00:27:29.496426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:27:29.496440 | orchestrator |
2026-03-09 00:27:29.496454 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-09 00:27:29.496467 | orchestrator | Monday 09 March 2026 00:27:09 +0000 (0:00:00.312) 0:00:48.497 **********
2026-03-09 00:27:29.496482 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.496495 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.496507 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.496520 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.496535 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.496547 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.496562 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.496576 | orchestrator |
2026-03-09 00:27:29.496589 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-09 00:27:29.496602 | orchestrator | Monday 09 March 2026 00:27:11 +0000 (0:00:01.591) 0:00:50.089 **********
2026-03-09 00:27:29.496615 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:27:29.496626 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:27:29.496634 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:27:29.496642 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:27:29.496649 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:27:29.496657 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:27:29.496665 | orchestrator | changed: [testbed-manager]
2026-03-09 00:27:29.496672 | orchestrator |
2026-03-09 00:27:29.496686 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-09 00:27:29.496699 | orchestrator | Monday 09 March 2026 00:27:12 +0000 (0:00:01.187) 0:00:51.276 **********
2026-03-09 00:27:29.496713 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:27:29.496727 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:27:29.496750 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:27:29.496763 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:27:29.496774 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:27:29.496800 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:27:29.496815 | orchestrator | changed: [testbed-manager]
2026-03-09 00:27:29.496829 | orchestrator |
2026-03-09 00:27:29.496843 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-09 00:27:29.496858 | orchestrator | Monday 09 March 2026 00:27:26 +0000 (0:00:13.861) 0:01:05.137 **********
2026-03-09 00:27:29.496871 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.496884 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.496896 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.496904 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.496911 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.496923 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.496937 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.496950 | orchestrator |
2026-03-09 00:27:29.496965 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-09 00:27:29.496979 | orchestrator | Monday 09 March 2026 00:27:27 +0000 (0:00:01.198) 0:01:06.336 **********
2026-03-09 00:27:29.496993 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.497003 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.497011 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.497018 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.497026 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.497033 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.497048 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.497062 | orchestrator |
2026-03-09 00:27:29.497076 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-09 00:27:29.497090 | orchestrator | Monday 09 March 2026 00:27:28 +0000 (0:00:00.948) 0:01:07.285 **********
2026-03-09 00:27:29.497105 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.497115 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.497123 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.497131 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.497138 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.497146 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.497160 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.497174 | orchestrator |
2026-03-09 00:27:29.497188 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-09 00:27:29.497203 | orchestrator | Monday 09 March 2026 00:27:28 +0000 (0:00:00.257) 0:01:07.542 **********
2026-03-09 00:27:29.497217 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:27:29.497231 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:27:29.497241 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:27:29.497250 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:27:29.497263 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:27:29.497277 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:27:29.497291 | orchestrator | ok: [testbed-manager]
2026-03-09 00:27:29.497305 | orchestrator |
2026-03-09 00:27:29.497317 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-09 00:27:29.497325 | orchestrator | Monday 09 March 2026 00:27:29 +0000 (0:00:00.280) 0:01:07.822 **********
2026-03-09 00:27:29.497334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:27:29.497342 | orchestrator |
2026-03-09 00:27:29.497357 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-09 00:29:56.609655 | orchestrator | Monday 09 March 2026 00:27:29 +0000 (0:00:00.338) 0:01:08.161 **********
2026-03-09 00:29:56.609772 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.609786 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.609795 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.609803 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.609837 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.609851 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:56.609864 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.609878 | orchestrator |
2026-03-09 00:29:56.609892 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-09 00:29:56.609959 | orchestrator | Monday 09 March 2026 00:27:31 +0000 (0:00:01.795) 0:01:09.957 **********
2026-03-09 00:29:56.609976 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:29:56.609991 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:29:56.610000 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:29:56.610008 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:29:56.610082 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:29:56.610091 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:29:56.610100 | orchestrator | changed: [testbed-manager]
2026-03-09 00:29:56.610107 | orchestrator |
2026-03-09 00:29:56.610116 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-09 00:29:56.610125 | orchestrator | Monday 09 March 2026 00:27:31 +0000 (0:00:00.610) 0:01:10.568 **********
2026-03-09 00:29:56.610133 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.610140 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.610149 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.610166 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.610176 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.610186 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.610195 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:56.610205 | orchestrator |
2026-03-09 00:29:56.610213 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-09 00:29:56.610223 | orchestrator | Monday 09 March 2026 00:27:32 +0000 (0:00:00.253) 0:01:10.821 **********
2026-03-09 00:29:56.610232 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.610241 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.610251 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.610260 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.610269 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.610278 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:56.610287 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.610297 | orchestrator |
2026-03-09 00:29:56.610306 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-09 00:29:56.610315 | orchestrator | Monday 09 March 2026 00:27:33 +0000 (0:00:01.180) 0:01:12.002 **********
2026-03-09 00:29:56.610325 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:29:56.610334 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:29:56.610344 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:29:56.610354 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:29:56.610364 | orchestrator | changed: [testbed-manager]
2026-03-09 00:29:56.610373 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:29:56.610382 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:29:56.610392 | orchestrator |
2026-03-09 00:29:56.610401 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-09 00:29:56.610410 | orchestrator | Monday 09 March 2026 00:27:35 +0000 (0:00:01.747) 0:01:13.749 **********
2026-03-09 00:29:56.610420 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.610429 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.610439 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.610448 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.610457 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:56.610466 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.610476 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.610485 | orchestrator |
2026-03-09 00:29:56.610495 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-09 00:29:56.610508 | orchestrator | Monday 09 March 2026 00:27:37 +0000 (0:00:02.471) 0:01:16.221 **********
2026-03-09 00:29:56.610522 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:56.610535 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.610548 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.610574 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.610586 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.610599 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.610611 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.610623 | orchestrator |
2026-03-09 00:29:56.610636 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-09 00:29:56.610649 | orchestrator | Monday 09 March 2026 00:28:10 +0000 (0:00:32.926) 0:01:49.147 **********
2026-03-09 00:29:56.610662 | orchestrator | changed: [testbed-manager]
2026-03-09 00:29:56.610673 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:29:56.610685 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:29:56.610697 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:29:56.610711 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:29:56.610725 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:29:56.610737 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:29:56.610750 | orchestrator |
2026-03-09 00:29:56.610763 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-09 00:29:56.610777 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:01:28.775) 0:03:17.922 **********
2026-03-09 00:29:56.610789 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.610801 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.610814 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.610827 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.610843 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:56.610857 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.610869 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.610883 | orchestrator |
2026-03-09 00:29:56.610893 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-09 00:29:56.610901 | orchestrator | Monday 09 March 2026 00:29:41 +0000 (0:00:01.945) 0:03:19.868 **********
2026-03-09 00:29:56.610939 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:56.610948 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:56.610956 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:56.610964 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:56.610971 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:56.610979 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:56.610987 | orchestrator | changed: [testbed-manager]
2026-03-09 00:29:56.610995 | orchestrator |
2026-03-09 00:29:56.611003 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-09 00:29:56.611011 | orchestrator | Monday 09 March 2026 00:29:55 +0000 (0:00:14.153) 0:03:34.022 **********
2026-03-09 00:29:56.611054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-09 00:29:56.611073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-09 00:29:56.611086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-09 00:29:56.611108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-09 00:29:56.611117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-09 00:29:56.611133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-09 00:29:56.611141 | orchestrator |
2026-03-09 00:29:56.611150 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-09 00:29:56.611158 | orchestrator | Monday 09 March 2026 00:29:55 +0000 (0:00:00.494) 0:03:34.517 **********
2026-03-09 00:29:56.611166 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611174 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611185 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:29:56.611193 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611201 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:29:56.611209 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:29:56.611217 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611224 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:56.611232 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:29:56.611255 | orchestrator |
2026-03-09 00:29:56.611263 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-09 00:29:56.611271 | orchestrator | Monday 09 March 2026 00:29:56 +0000 (0:00:00.678) 0:03:35.195 **********
2026-03-09 00:29:56.611279 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:56.611288 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:56.611296 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:56.611304 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:56.611312 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:56.611325 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:30:02.899777 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.899952 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.899975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:30:02.899988 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.899999 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:30:02.900034 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.900046 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.900058 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:30:02.900068 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:30:02.900079 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.900090 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.900101 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.900112 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.900122 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.900133 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:30:02.900144 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:30:02.900159 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:30:02.900178 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:30:02.900195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:30:02.900214 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.900234 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.900252 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.900263 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.900274 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.900287 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:30:02.900302 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:30:02.900315 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:30:02.900328 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:30:02.900341 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:30:02.900369 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:30:02.900383 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:30:02.900395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:30:02.900408 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.900421 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.900492 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.900512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.900530 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.900548 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:30:02.900570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:30:02.900605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:30:02.900625 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:30:02.900679 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:30:02.900698 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:30:02.900742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:30:02.900762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:30:02.900781 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:30:02.900792 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:30:02.900803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:30:02.900814 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:30:02.900824 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:30:02.900835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.900853 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:30:02.900872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.900889 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.900933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:30:02.900952 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.900969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.900984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:30:02.901003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.901020 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.901038 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:30:02.901055 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.901073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:30:02.901091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:30:02.901110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.901130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.901149 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:30:02.901167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:30:02.901186 | orchestrator |
2026-03-09 00:30:02.901207 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-09 00:30:02.901226 | orchestrator | Monday 09 March 2026 00:30:01 +0000 (0:00:05.219) 0:03:40.414 **********
2026-03-09 00:30:02.901245 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901277 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901307 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901327 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901344 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901383 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:30:02.901401 | orchestrator |
2026-03-09 00:30:02.901419 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-09 00:30:02.901435 | orchestrator | Monday 09 March 2026 00:30:02 +0000 (0:00:00.724) 0:03:41.139 **********
2026-03-09 00:30:02.901453 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:02.901473 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:30:02.901493 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:02.901512 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:02.901531 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:30:02.901550 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:30:02.901569 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:02.901589 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:30:02.901608 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:02.901627 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:02.901673 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.085938 | orchestrator |
2026-03-09 00:30:18.086077 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-09 00:30:18.086091 | orchestrator | Monday 09 March 2026 00:30:02 +0000 (0:00:00.459) 0:03:41.598 **********
2026-03-09 00:30:18.086097 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086104 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086110 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:30:18.086117 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086122 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:30:18.086127 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:30:18.086133 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086138 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:30:18.086143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086148 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086154 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:30:18.086159 | orchestrator |
2026-03-09 00:30:18.086167 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-09 00:30:18.086178 | orchestrator | Monday 09 March 2026 00:30:03 +0000 (0:00:00.621) 0:03:42.219 **********
2026-03-09 00:30:18.086190 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086198 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086228 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:30:18.086237 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086245 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:30:18.086253 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:30:18.086261 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086269 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:30:18.086278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086286 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086293 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:30:18.086301 | orchestrator |
2026-03-09 00:30:18.086308 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-09 00:30:18.086316 | orchestrator | Monday 09 March 2026 00:30:04 +0000 (0:00:00.589) 0:03:42.809 **********
2026-03-09 00:30:18.086323 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:30:18.086330 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:30:18.086338 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:30:18.086346 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:30:18.086354 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:30:18.086363 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:30:18.086371 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:30:18.086379 | orchestrator |
2026-03-09 00:30:18.086388 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-09 00:30:18.086396 | orchestrator | Monday 09 March 2026 00:30:04 +0000 (0:00:00.422) 0:03:43.232 **********
2026-03-09 00:30:18.086405 | orchestrator | ok: [testbed-manager]
2026-03-09 00:30:18.086414 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:30:18.086421 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:30:18.086429 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:30:18.086436 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:30:18.086443 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:30:18.086452 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:30:18.086459 | orchestrator |
2026-03-09 00:30:18.086467 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-09 00:30:18.086477 | orchestrator | Monday 09 March 2026 00:30:10 +0000 (0:00:05.814) 0:03:49.047 **********
2026-03-09 00:30:18.086486 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-09 00:30:18.086495 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-09 00:30:18.086503 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:30:18.086511 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-09 00:30:18.086519 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:30:18.086527 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:30:18.086536 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-09 00:30:18.086545 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-09 00:30:18.086554 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:30:18.086562 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-09 00:30:18.086570 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:30:18.086579 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:30:18.086586 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-09 00:30:18.086594 |
orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:18.086601 | orchestrator | 2026-03-09 00:30:18.086610 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-09 00:30:18.086618 | orchestrator | Monday 09 March 2026 00:30:10 +0000 (0:00:00.279) 0:03:49.326 ********** 2026-03-09 00:30:18.086626 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-09 00:30:18.086634 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-09 00:30:18.086652 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-09 00:30:18.086682 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-09 00:30:18.086691 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-09 00:30:18.086699 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-09 00:30:18.086707 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-09 00:30:18.086715 | orchestrator | 2026-03-09 00:30:18.086723 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-09 00:30:18.086731 | orchestrator | Monday 09 March 2026 00:30:12 +0000 (0:00:02.180) 0:03:51.507 ********** 2026-03-09 00:30:18.086742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:30:18.086752 | orchestrator | 2026-03-09 00:30:18.086777 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-09 00:30:18.086786 | orchestrator | Monday 09 March 2026 00:30:13 +0000 (0:00:00.501) 0:03:52.008 ********** 2026-03-09 00:30:18.086795 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:18.086804 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:18.086812 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:18.086840 | orchestrator | ok: 
[testbed-node-5] 2026-03-09 00:30:18.086849 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:18.086857 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:18.086880 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:18.086889 | orchestrator | 2026-03-09 00:30:18.086898 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-09 00:30:18.086906 | orchestrator | Monday 09 March 2026 00:30:15 +0000 (0:00:02.117) 0:03:54.125 ********** 2026-03-09 00:30:18.086914 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:18.086922 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:18.086931 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:18.086939 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:18.086947 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:18.086955 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:18.086964 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:18.086972 | orchestrator | 2026-03-09 00:30:18.086980 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-09 00:30:18.086989 | orchestrator | Monday 09 March 2026 00:30:16 +0000 (0:00:00.696) 0:03:54.822 ********** 2026-03-09 00:30:18.086995 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:18.087000 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:18.087006 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:18.087011 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:18.087016 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:18.087021 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:18.087026 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:18.087031 | orchestrator | 2026-03-09 00:30:18.087036 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-09 00:30:18.087041 | orchestrator | Monday 09 March 2026 00:30:16 +0000 (0:00:00.656) 
0:03:55.478 ********** 2026-03-09 00:30:18.087046 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:18.087052 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:18.087057 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:18.087062 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:18.087067 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:18.087072 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:18.087077 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:18.087082 | orchestrator | 2026-03-09 00:30:18.087087 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-09 00:30:18.087092 | orchestrator | Monday 09 March 2026 00:30:17 +0000 (0:00:00.665) 0:03:56.143 ********** 2026-03-09 00:30:18.087106 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014743.7556088, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:18.087123 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014740.9637663, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:18.087129 | orchestrator | 
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014732.3103356, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:18.087153 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014743.4596128, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455527 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014719.8041892, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455663 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014721.0222118, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455682 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014721.2267985, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455713 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455750 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455762 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455774 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455816 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455829 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455840 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:30:23.455853 | orchestrator | 2026-03-09 00:30:23.455947 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-09 00:30:23.455981 | orchestrator | Monday 09 March 2026 00:30:18 +0000 (0:00:01.045) 0:03:57.189 ********** 2026-03-09 00:30:23.455999 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:23.456018 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:23.456037 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:23.456057 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:23.456076 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:23.456096 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:23.456110 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:23.456124 | orchestrator | 2026-03-09 00:30:23.456140 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-09 00:30:23.456158 | orchestrator | Monday 09 March 2026 00:30:19 +0000 (0:00:01.062) 0:03:58.252 ********** 2026-03-09 00:30:23.456177 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:23.456195 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:23.456222 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:23.456241 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:23.456257 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:23.456270 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:23.456284 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:23.456296 | orchestrator | 2026-03-09 00:30:23.456309 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-09 00:30:23.456320 | orchestrator | Monday 09 March 2026 00:30:20 +0000 (0:00:01.178) 0:03:59.430 ********** 2026-03-09 00:30:23.456331 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:23.456341 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:23.456352 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:23.456363 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:23.456373 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:23.456384 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:23.456394 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:23.456406 | orchestrator | 2026-03-09 00:30:23.456417 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-09 00:30:23.456428 | orchestrator | Monday 09 March 2026 00:30:21 +0000 (0:00:01.084) 0:04:00.514 ********** 2026-03-09 00:30:23.456438 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:23.456449 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:23.456459 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:23.456470 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 00:30:23.456480 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:23.456491 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:23.456501 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:23.456512 | orchestrator | 2026-03-09 00:30:23.456523 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-09 00:30:23.456534 | orchestrator | Monday 09 March 2026 00:30:22 +0000 (0:00:00.378) 0:04:00.893 ********** 2026-03-09 00:30:23.456545 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:23.456556 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:23.456567 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:23.456577 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:23.456588 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:23.456599 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:23.456609 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:23.456620 | orchestrator | 2026-03-09 00:30:23.456631 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-09 00:30:23.456642 | orchestrator | Monday 09 March 2026 00:30:22 +0000 (0:00:00.765) 0:04:01.659 ********** 2026-03-09 00:30:23.456655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:30:23.456668 | orchestrator | 2026-03-09 00:30:23.456679 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-09 00:30:23.456711 | orchestrator | Monday 09 March 2026 00:30:23 +0000 (0:00:00.463) 0:04:02.122 ********** 2026-03-09 00:31:46.901052 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901149 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:46.901160 | orchestrator | changed: 
[testbed-node-1] 2026-03-09 00:31:46.901168 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:46.901176 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:46.901184 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:46.901192 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:46.901200 | orchestrator | 2026-03-09 00:31:46.901209 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-09 00:31:46.901218 | orchestrator | Monday 09 March 2026 00:30:32 +0000 (0:00:09.516) 0:04:11.639 ********** 2026-03-09 00:31:46.901226 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:31:46.901234 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:31:46.901242 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:31:46.901249 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:31:46.901256 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:31:46.901264 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901271 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:31:46.901279 | orchestrator | 2026-03-09 00:31:46.901286 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-09 00:31:46.901294 | orchestrator | Monday 09 March 2026 00:30:34 +0000 (0:00:01.345) 0:04:12.984 ********** 2026-03-09 00:31:46.901302 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:31:46.901309 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:31:46.901317 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:31:46.901324 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:31:46.901331 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:31:46.901339 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:31:46.901347 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901354 | orchestrator | 2026-03-09 00:31:46.901362 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-09 00:31:46.901369 | orchestrator | 
Monday 09 March 2026 00:30:35 +0000 (0:00:01.352) 0:04:14.337 ********** 2026-03-09 00:31:46.901377 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:31:46.901384 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:31:46.901391 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:31:46.901399 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:31:46.901406 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:31:46.901414 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:31:46.901421 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901429 | orchestrator | 2026-03-09 00:31:46.901436 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-09 00:31:46.901445 | orchestrator | Monday 09 March 2026 00:30:35 +0000 (0:00:00.322) 0:04:14.659 ********** 2026-03-09 00:31:46.901452 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:31:46.901460 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:31:46.901467 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:31:46.901474 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:31:46.901482 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:31:46.901489 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:31:46.901497 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901504 | orchestrator | 2026-03-09 00:31:46.901512 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-09 00:31:46.901520 | orchestrator | Monday 09 March 2026 00:30:36 +0000 (0:00:00.343) 0:04:15.002 ********** 2026-03-09 00:31:46.901527 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:31:46.901534 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:31:46.901542 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:31:46.901592 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:31:46.901600 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:31:46.901608 | orchestrator | ok: [testbed-node-2] 2026-03-09 
00:31:46.901617 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901626 | orchestrator | 2026-03-09 00:31:46.901634 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-09 00:31:46.901665 | orchestrator | Monday 09 March 2026 00:30:36 +0000 (0:00:00.333) 0:04:15.336 ********** 2026-03-09 00:31:46.901674 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:31:46.901683 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:31:46.901691 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:31:46.901700 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:31:46.901709 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:31:46.901718 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:31:46.901727 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:46.901736 | orchestrator | 2026-03-09 00:31:46.901744 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-09 00:31:46.901753 | orchestrator | Monday 09 March 2026 00:30:41 +0000 (0:00:05.238) 0:04:20.574 ********** 2026-03-09 00:31:46.901764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:31:46.901776 | orchestrator | 2026-03-09 00:31:46.901784 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-09 00:31:46.901794 | orchestrator | Monday 09 March 2026 00:30:42 +0000 (0:00:00.432) 0:04:21.006 ********** 2026-03-09 00:31:46.901803 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901811 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-09 00:31:46.901820 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901828 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:31:46.901835 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-09 00:31:46.901843 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901850 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-09 00:31:46.901857 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:31:46.901865 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901872 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:31:46.901880 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-09 00:31:46.901887 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901894 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:31:46.901902 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-09 00:31:46.901909 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901917 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-09 00:31:46.901939 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:31:46.901947 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:31:46.901955 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-09 00:31:46.901962 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-09 00:31:46.901969 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:31:46.901977 | orchestrator | 2026-03-09 00:31:46.901984 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-09 00:31:46.902005 | orchestrator | Monday 09 March 2026 00:30:42 +0000 (0:00:00.376) 0:04:21.383 ********** 2026-03-09 00:31:46.902056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:31:46.902067 | orchestrator |
2026-03-09 00:31:46.902074 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-09 00:31:46.902082 | orchestrator | Monday 09 March 2026 00:30:43 +0000 (0:00:00.461) 0:04:21.844 **********
2026-03-09 00:31:46.902089 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-09 00:31:46.902096 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-09 00:31:46.902103 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:46.902117 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-09 00:31:46.902124 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:46.902132 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-09 00:31:46.902139 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:46.902146 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-09 00:31:46.902153 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:31:46.902160 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-09 00:31:46.902167 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:31:46.902174 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:31:46.902181 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-09 00:31:46.902188 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:31:46.902203 | orchestrator |
2026-03-09 00:31:46.902210 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-09 00:31:46.902217 | orchestrator | Monday 09 March 2026 00:30:43 +0000 (0:00:00.365) 0:04:22.210 **********
2026-03-09 00:31:46.902225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:31:46.902232 | orchestrator |
2026-03-09 00:31:46.902239 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-09 00:31:46.902251 | orchestrator | Monday 09 March 2026 00:30:43 +0000 (0:00:00.449) 0:04:22.660 **********
2026-03-09 00:31:46.902258 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:31:46.902265 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:31:46.902272 | orchestrator | changed: [testbed-manager]
2026-03-09 00:31:46.902280 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:31:46.902287 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:31:46.902294 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:31:46.902301 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:31:46.902308 | orchestrator |
2026-03-09 00:31:46.902316 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-09 00:31:46.902323 | orchestrator | Monday 09 March 2026 00:31:20 +0000 (0:00:36.938) 0:04:59.599 **********
2026-03-09 00:31:46.902330 | orchestrator | changed: [testbed-manager]
2026-03-09 00:31:46.902337 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:31:46.902344 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:31:46.902351 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:31:46.902359 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:31:46.902366 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:31:46.902373 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:31:46.902380 | orchestrator |
2026-03-09 00:31:46.902387 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-09 00:31:46.902394 | orchestrator | Monday 09 March 2026 00:31:29 +0000 (0:00:09.046) 0:05:08.646 **********
2026-03-09 00:31:46.902402 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:31:46.902409 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:31:46.902416 | orchestrator | changed: [testbed-manager]
2026-03-09 00:31:46.902423 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:31:46.902430 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:31:46.902437 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:31:46.902444 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:31:46.902452 | orchestrator |
2026-03-09 00:31:46.902459 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-09 00:31:46.902466 | orchestrator | Monday 09 March 2026 00:31:38 +0000 (0:00:08.689) 0:05:17.335 **********
2026-03-09 00:31:46.902473 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:31:46.902480 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:31:46.902488 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:31:46.902495 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:31:46.902507 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:31:46.902514 | orchestrator | ok: [testbed-manager]
2026-03-09 00:31:46.902521 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:31:46.902528 | orchestrator |
2026-03-09 00:31:46.902536 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-09 00:31:46.902543 | orchestrator | Monday 09 March 2026 00:31:40 +0000 (0:00:02.105) 0:05:19.441 **********
2026-03-09 00:31:46.902573 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:31:46.902580 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:31:46.902587 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:31:46.902595 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:31:46.902602 | orchestrator | changed: [testbed-manager]
2026-03-09 00:31:46.902609 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:31:46.902616 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:31:46.902623 | orchestrator |
2026-03-09 00:31:46.902636 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-09 00:31:58.211238 | orchestrator | Monday 09 March 2026 00:31:46 +0000 (0:00:06.124) 0:05:25.565 **********
2026-03-09 00:31:58.211354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:31:58.211373 | orchestrator |
2026-03-09 00:31:58.211386 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-09 00:31:58.211398 | orchestrator | Monday 09 March 2026 00:31:47 +0000 (0:00:00.467) 0:05:26.032 **********
2026-03-09 00:31:58.211409 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:31:58.211420 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:31:58.211431 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:31:58.211442 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:31:58.211452 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:31:58.211463 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:31:58.211474 | orchestrator | changed: [testbed-manager]
2026-03-09 00:31:58.211485 | orchestrator |
2026-03-09 00:31:58.211496 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-09 00:31:58.211507 | orchestrator | Monday 09 March 2026 00:31:48 +0000 (0:00:00.727) 0:05:26.760 **********
2026-03-09 00:31:58.211523 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:31:58.211544 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:31:58.211623 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:31:58.211641 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:31:58.211660 | orchestrator | ok: [testbed-manager]
2026-03-09 00:31:58.211677 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:31:58.211694 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:31:58.211713 | orchestrator |
2026-03-09 00:31:58.211732 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-09 00:31:58.211751 | orchestrator | Monday 09 March 2026 00:31:49 +0000 (0:00:01.669) 0:05:28.429 **********
2026-03-09 00:31:58.211770 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:31:58.211790 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:31:58.211809 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:31:58.211829 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:31:58.211846 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:31:58.211860 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:31:58.211878 | orchestrator | changed: [testbed-manager]
2026-03-09 00:31:58.211903 | orchestrator |
2026-03-09 00:31:58.211926 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-09 00:31:58.211942 | orchestrator | Monday 09 March 2026 00:31:50 +0000 (0:00:00.840) 0:05:29.270 **********
2026-03-09 00:31:58.211959 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:58.211976 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:58.211993 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:58.212010 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:31:58.212029 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:31:58.212082 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:31:58.212096 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:31:58.212106 | orchestrator |
2026-03-09 00:31:58.212132 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-09 00:31:58.212143 | orchestrator | Monday 09 March 2026 00:31:50 +0000 (0:00:00.300) 0:05:29.570 **********
2026-03-09 00:31:58.212154 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:58.212165 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:58.212175 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:58.212186 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:31:58.212196 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:31:58.212207 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:31:58.212217 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:31:58.212228 | orchestrator |
2026-03-09 00:31:58.212239 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-09 00:31:58.212250 | orchestrator | Monday 09 March 2026 00:31:51 +0000 (0:00:00.417) 0:05:29.988 **********
2026-03-09 00:31:58.212261 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:31:58.212271 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:31:58.212282 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:31:58.212292 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:31:58.212303 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:31:58.212313 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:31:58.212324 | orchestrator | ok: [testbed-manager]
2026-03-09 00:31:58.212334 | orchestrator |
2026-03-09 00:31:58.212345 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-09 00:31:58.212356 | orchestrator | Monday 09 March 2026 00:31:51 +0000 (0:00:00.290) 0:05:30.279 **********
2026-03-09 00:31:58.212367 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:58.212377 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:58.212388 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:58.212398 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:31:58.212409 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:31:58.212419 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:31:58.212429 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:31:58.212440 | orchestrator |
2026-03-09 00:31:58.212451 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-09 00:31:58.212462 | orchestrator | Monday 09 March 2026 00:31:51 +0000 (0:00:00.309) 0:05:30.588 **********
2026-03-09 00:31:58.212473 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:31:58.212483 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:31:58.212496 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:31:58.212515 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:31:58.212533 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:31:58.212549 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:31:58.212590 | orchestrator | ok: [testbed-manager]
2026-03-09 00:31:58.212607 | orchestrator |
2026-03-09 00:31:58.212627 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-09 00:31:58.212646 | orchestrator | Monday 09 March 2026 00:31:52 +0000 (0:00:00.335) 0:05:30.924 **********
2026-03-09 00:31:58.212664 | orchestrator | ok: [testbed-node-3] =>
2026-03-09 00:31:58.212680 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212691 | orchestrator | ok: [testbed-node-4] =>
2026-03-09 00:31:58.212702 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212712 | orchestrator | ok: [testbed-node-5] =>
2026-03-09 00:31:58.212723 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212734 | orchestrator | ok: [testbed-node-0] =>
2026-03-09 00:31:58.212744 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212775 | orchestrator | ok: [testbed-node-1] =>
2026-03-09 00:31:58.212787 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212797 | orchestrator | ok: [testbed-node-2] =>
2026-03-09 00:31:58.212808 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212819 | orchestrator | ok: [testbed-manager] =>
2026-03-09 00:31:58.212829 | orchestrator |   docker_version: 5:27.5.1
2026-03-09 00:31:58.212850 | orchestrator |
2026-03-09 00:31:58.212861 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-09 00:31:58.212872 | orchestrator | Monday 09 March 2026 00:31:52 +0000 (0:00:00.300) 0:05:31.224 **********
2026-03-09 00:31:58.212882 | orchestrator | ok: [testbed-node-3] =>
2026-03-09 00:31:58.212893 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.212903 | orchestrator | ok: [testbed-node-4] =>
2026-03-09 00:31:58.212914 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.212924 | orchestrator | ok: [testbed-node-5] =>
2026-03-09 00:31:58.212934 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.212945 | orchestrator | ok: [testbed-node-0] =>
2026-03-09 00:31:58.212955 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.212966 | orchestrator | ok: [testbed-node-1] =>
2026-03-09 00:31:58.212976 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.212987 | orchestrator | ok: [testbed-node-2] =>
2026-03-09 00:31:58.212997 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.213008 | orchestrator | ok: [testbed-manager] =>
2026-03-09 00:31:58.213018 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-09 00:31:58.213029 | orchestrator |
2026-03-09 00:31:58.213039 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-09 00:31:58.213050 | orchestrator | Monday 09 March 2026 00:31:52 +0000 (0:00:00.345) 0:05:31.570 **********
2026-03-09 00:31:58.213061 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:58.213071 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:58.213082 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:58.213092 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:31:58.213103 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:31:58.213113 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:31:58.213124 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:31:58.213135 | orchestrator |
2026-03-09 00:31:58.213145 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-09 00:31:58.213156 | orchestrator | Monday 09 March 2026 00:31:53 +0000 (0:00:00.288) 0:05:31.858 **********
2026-03-09 00:31:58.213166 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:58.213177 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:58.213187 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:58.213198 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:31:58.213208 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:31:58.213219 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:31:58.213229 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:31:58.213240 | orchestrator |
2026-03-09 00:31:58.213250 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-09 00:31:58.213261 | orchestrator | Monday 09 March 2026 00:31:53 +0000 (0:00:00.311) 0:05:32.170 **********
2026-03-09 00:31:58.213281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:31:58.213295 | orchestrator |
2026-03-09 00:31:58.213306 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-09 00:31:58.213317 | orchestrator | Monday 09 March 2026 00:31:54 +0000 (0:00:00.542) 0:05:32.712 **********
2026-03-09 00:31:58.213327 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:31:58.213338 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:31:58.213348 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:31:58.213359 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:31:58.213369 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:31:58.213380 | orchestrator | ok: [testbed-manager]
2026-03-09 00:31:58.213390 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:31:58.213401 | orchestrator |
2026-03-09 00:31:58.213412 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-09 00:31:58.213422 | orchestrator | Monday 09 March 2026 00:31:54 +0000 (0:00:00.837) 0:05:33.549 **********
2026-03-09 00:31:58.213439 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:31:58.213450 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:31:58.213461 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:31:58.213471 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:31:58.213482 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:31:58.213492 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:31:58.213503 | orchestrator | ok: [testbed-manager]
2026-03-09 00:31:58.213513 | orchestrator |
2026-03-09 00:31:58.213524 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-09 00:31:58.213536 | orchestrator | Monday 09 March 2026 00:31:57 +0000 (0:00:02.938) 0:05:36.488 **********
2026-03-09 00:31:58.213546 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-09 00:31:58.213665 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-09 00:31:58.213687 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-09 00:31:58.213700 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-09 00:31:58.213711 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-09 00:31:58.213722 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-09 00:31:58.213733 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:31:58.213743 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-09 00:31:58.213754 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-09 00:31:58.213765 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-09 00:31:58.213776 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:31:58.213787 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-09 00:31:58.213797 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-09 00:31:58.213808 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-09 00:31:58.213819 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:31:58.213829 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-09 00:31:58.213850 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-09 00:33:01.567949 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-09 00:33:01.568046 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:01.568058 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-09 00:33:01.568067 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-09 00:33:01.568075 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-09 00:33:01.568083 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:01.568090 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:01.568098 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-09 00:33:01.568105 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-09 00:33:01.568113 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-09 00:33:01.568120 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:01.568128 | orchestrator |
2026-03-09 00:33:01.568137 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-09 00:33:01.568145 | orchestrator | Monday 09 March 2026 00:31:58 +0000 (0:00:00.637) 0:05:37.125 **********
2026-03-09 00:33:01.568153 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.568161 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.568168 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.568176 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.568183 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.568190 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.568198 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.568205 | orchestrator |
2026-03-09 00:33:01.568212 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-09 00:33:01.568220 | orchestrator | Monday 09 March 2026 00:32:05 +0000 (0:00:06.682) 0:05:43.808 **********
2026-03-09 00:33:01.568227 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.568253 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.568263 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.568289 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.568300 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.568313 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.568324 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.568336 | orchestrator |
2026-03-09 00:33:01.568348 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-09 00:33:01.568360 | orchestrator | Monday 09 March 2026 00:32:06 +0000 (0:00:01.086) 0:05:44.894 **********
2026-03-09 00:33:01.568371 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.568382 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.568392 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.568404 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.568415 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.568426 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.568436 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.568448 | orchestrator |
2026-03-09 00:33:01.568460 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-09 00:33:01.568472 | orchestrator | Monday 09 March 2026 00:32:14 +0000 (0:00:08.570) 0:05:53.464 **********
2026-03-09 00:33:01.568484 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.568510 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.568522 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.568534 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.568547 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:01.568559 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.568594 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.568605 | orchestrator |
2026-03-09 00:33:01.568617 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-09 00:33:01.568630 | orchestrator | Monday 09 March 2026 00:32:18 +0000 (0:00:03.600) 0:05:57.064 **********
2026-03-09 00:33:01.568644 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.568657 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.568670 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.568684 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.568699 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.568713 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.568728 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.568742 | orchestrator |
2026-03-09 00:33:01.568756 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-09 00:33:01.568771 | orchestrator | Monday 09 March 2026 00:32:19 +0000 (0:00:01.285) 0:05:58.349 **********
2026-03-09 00:33:01.568786 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.568800 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.568815 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.568829 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.568844 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.568858 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.568872 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.568887 | orchestrator |
2026-03-09 00:33:01.568900 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-09 00:33:01.568915 | orchestrator | Monday 09 March 2026 00:32:21 +0000 (0:00:01.492) 0:05:59.842 **********
2026-03-09 00:33:01.568929 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:01.568942 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:01.568957 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:01.568970 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:01.568983 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:01.568998 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:01.569013 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:01.569027 | orchestrator |
2026-03-09 00:33:01.569042 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-09 00:33:01.569073 | orchestrator | Monday 09 March 2026 00:32:22 +0000 (0:00:01.085) 0:06:00.927 **********
2026-03-09 00:33:01.569088 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.569102 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.569113 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.569121 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.569130 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.569139 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.569147 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.569155 | orchestrator |
2026-03-09 00:33:01.569164 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-09 00:33:01.569194 | orchestrator | Monday 09 March 2026 00:32:32 +0000 (0:00:09.746) 0:06:10.674 **********
2026-03-09 00:33:01.569203 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.569212 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.569220 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.569229 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.569237 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.569246 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:01.569255 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.569263 | orchestrator |
2026-03-09 00:33:01.569272 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-09 00:33:01.569281 | orchestrator | Monday 09 March 2026 00:32:32 +0000 (0:00:00.884) 0:06:11.558 **********
2026-03-09 00:33:01.569289 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.569297 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.569306 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.569314 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.569323 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.569331 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.569340 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.569348 | orchestrator |
2026-03-09 00:33:01.569357 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-09 00:33:01.569366 | orchestrator | Monday 09 March 2026 00:32:42 +0000 (0:00:09.764) 0:06:21.323 **********
2026-03-09 00:33:01.569374 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.569383 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.569391 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.569400 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.569408 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.569417 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.569425 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.569434 | orchestrator |
2026-03-09 00:33:01.569442 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-09 00:33:01.569451 | orchestrator | Monday 09 March 2026 00:32:54 +0000 (0:00:11.939) 0:06:33.263 **********
2026-03-09 00:33:01.569460 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-09 00:33:01.569469 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-09 00:33:01.569477 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-09 00:33:01.569486 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-09 00:33:01.569494 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-09 00:33:01.569503 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-09 00:33:01.569511 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-09 00:33:01.569520 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-09 00:33:01.569528 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-09 00:33:01.569537 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-09 00:33:01.569545 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-09 00:33:01.569554 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-09 00:33:01.569586 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-09 00:33:01.569596 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-09 00:33:01.569611 | orchestrator |
2026-03-09 00:33:01.569620 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-09 00:33:01.569629 | orchestrator | Monday 09 March 2026 00:32:55 +0000 (0:00:01.185) 0:06:34.448 **********
2026-03-09 00:33:01.569638 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:01.569646 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:01.569655 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:01.569664 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:01.569673 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:01.569687 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:01.569702 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:01.569717 | orchestrator |
2026-03-09 00:33:01.569732 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-09 00:33:01.569798 | orchestrator | Monday 09 March 2026 00:32:56 +0000 (0:00:00.497) 0:06:34.946 **********
2026-03-09 00:33:01.569815 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:01.569831 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:01.569846 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:01.569861 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:01.569870 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:01.569878 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:01.569887 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:01.569896 | orchestrator |
2026-03-09 00:33:01.569904 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-09 00:33:01.569914 | orchestrator | Monday 09 March 2026 00:33:00 +0000 (0:00:04.292) 0:06:39.238 **********
2026-03-09 00:33:01.569923 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:01.569932 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:01.569940 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:01.569949 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:01.569957 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:01.569966 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:01.569974 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:01.569983 | orchestrator |
2026-03-09 00:33:01.569993 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-09 00:33:01.570002 | orchestrator | Monday 09 March 2026 00:33:01 +0000 (0:00:00.728) 0:06:39.967 **********
2026-03-09 00:33:01.570010 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-09 00:33:01.570080 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-09 00:33:01.570089 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:01.570098 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-09 00:33:01.570107 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-09 00:33:01.570116 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:01.570124 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-09 00:33:01.570133 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-09 00:33:01.570142 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:01.570161 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-09 00:33:23.199078 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-09 00:33:23.199186 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:23.199202 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-09 00:33:23.199215 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-09 00:33:23.199226 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:23.199238 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-09 00:33:23.199249 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-09 00:33:23.199260 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:23.199271 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-09 00:33:23.199311 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-09 00:33:23.199322 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:23.199333 | orchestrator |
2026-03-09 00:33:23.199346 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-09 00:33:23.199357 | orchestrator | Monday 09 March 2026 00:33:01 +0000 (0:00:00.575) 0:06:40.543 **********
2026-03-09 00:33:23.199368 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:23.199379 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:23.199390 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:23.199401 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:23.199412 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:23.199423 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:23.199433 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:23.199444 | orchestrator |
2026-03-09 00:33:23.199455 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-09 00:33:23.199466 | orchestrator | Monday 09 March 2026 00:33:02 +0000 (0:00:00.614) 0:06:41.158 **********
2026-03-09 00:33:23.199477 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:23.199488 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:23.199499 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:23.199510 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:23.199521 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:23.199532 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:23.199542 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:23.199553 | orchestrator |
2026-03-09 00:33:23.199614 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-09 00:33:23.199637 | orchestrator | Monday 09 March 2026 00:33:03 +0000 (0:00:00.717) 0:06:41.876 **********
2026-03-09 00:33:23.199657 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:23.199674 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:23.199687 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:23.199699 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:23.199712 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:23.199725 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:23.199737 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:23.199750 | orchestrator |
2026-03-09 00:33:23.199762 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-09 00:33:23.199790 | orchestrator | Monday 09 March 2026 00:33:03 +0000 (0:00:00.617) 0:06:42.493 **********
2026-03-09 00:33:23.199803 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:23.199816 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:23.199829 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:23.199841 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:23.199854 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:23.199867 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:23.199879 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:23.199891 | orchestrator |
2026-03-09 00:33:23.199905 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-09 00:33:23.199917 | orchestrator | Monday 09 March 2026 00:33:06 +0000 (0:00:02.357) 0:06:44.850 **********
2026-03-09 00:33:23.199930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3,
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:33:23.199945 | orchestrator | 2026-03-09 00:33:23.199959 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-09 00:33:23.199972 | orchestrator | Monday 09 March 2026 00:33:07 +0000 (0:00:00.937) 0:06:45.788 ********** 2026-03-09 00:33:23.199985 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:23.199997 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:23.200011 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:23.200024 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:23.200037 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:23.200061 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:23.200072 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.200082 | orchestrator | 2026-03-09 00:33:23.200093 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-09 00:33:23.200104 | orchestrator | Monday 09 March 2026 00:33:08 +0000 (0:00:00.890) 0:06:46.678 ********** 2026-03-09 00:33:23.200115 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:23.200125 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:23.200136 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:23.200147 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:23.200158 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:23.200168 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:23.200179 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.200190 | orchestrator | 2026-03-09 00:33:23.200201 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-09 00:33:23.200212 | orchestrator | Monday 09 March 2026 00:33:08 +0000 (0:00:00.949) 0:06:47.628 ********** 2026-03-09 00:33:23.200223 | orchestrator | changed: [testbed-node-3] 
2026-03-09 00:33:23.200234 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:23.200244 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:23.200255 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:23.200265 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:23.200276 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:23.200287 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.200298 | orchestrator | 2026-03-09 00:33:23.200309 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-09 00:33:23.200338 | orchestrator | Monday 09 March 2026 00:33:10 +0000 (0:00:01.802) 0:06:49.431 ********** 2026-03-09 00:33:23.200349 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:33:23.200360 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:23.200371 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:23.200382 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:23.200392 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:23.200403 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:23.200413 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:23.200424 | orchestrator | 2026-03-09 00:33:23.200435 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-09 00:33:23.200446 | orchestrator | Monday 09 March 2026 00:33:12 +0000 (0:00:01.379) 0:06:50.810 ********** 2026-03-09 00:33:23.200456 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:23.200467 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:23.200478 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:23.200489 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:23.200499 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:23.200510 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.200521 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:23.200531 | orchestrator | 
2026-03-09 00:33:23.200542 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-09 00:33:23.200553 | orchestrator | Monday 09 March 2026 00:33:13 +0000 (0:00:01.367) 0:06:52.178 ********** 2026-03-09 00:33:23.200603 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:23.200614 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:23.200625 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:23.200636 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:23.200647 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:23.200657 | orchestrator | changed: [testbed-manager] 2026-03-09 00:33:23.200668 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:23.200679 | orchestrator | 2026-03-09 00:33:23.200690 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-09 00:33:23.200700 | orchestrator | Monday 09 March 2026 00:33:14 +0000 (0:00:01.393) 0:06:53.571 ********** 2026-03-09 00:33:23.200712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:33:23.200738 | orchestrator | 2026-03-09 00:33:23.200749 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-09 00:33:23.200760 | orchestrator | Monday 09 March 2026 00:33:15 +0000 (0:00:01.047) 0:06:54.619 ********** 2026-03-09 00:33:23.200771 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:23.200781 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:23.200792 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:23.200803 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:23.200814 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:23.200824 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:23.200835 | 
orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.200845 | orchestrator | 2026-03-09 00:33:23.200856 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-09 00:33:23.200868 | orchestrator | Monday 09 March 2026 00:33:17 +0000 (0:00:01.467) 0:06:56.086 ********** 2026-03-09 00:33:23.200878 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:23.200889 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:23.200900 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:23.200911 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:23.200921 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:23.200932 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:23.200942 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.200953 | orchestrator | 2026-03-09 00:33:23.200964 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-09 00:33:23.200975 | orchestrator | Monday 09 March 2026 00:33:18 +0000 (0:00:01.201) 0:06:57.288 ********** 2026-03-09 00:33:23.200985 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:23.200996 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:23.201007 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:23.201017 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:23.201028 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:23.201039 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.201049 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:23.201060 | orchestrator | 2026-03-09 00:33:23.201070 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-09 00:33:23.201081 | orchestrator | Monday 09 March 2026 00:33:20 +0000 (0:00:02.147) 0:06:59.436 ********** 2026-03-09 00:33:23.201092 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:23.201103 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:23.201113 | orchestrator | ok: 
[testbed-node-5] 2026-03-09 00:33:23.201124 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:23.201135 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:23.201145 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:23.201156 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:23.201167 | orchestrator | 2026-03-09 00:33:23.201177 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-09 00:33:23.201188 | orchestrator | Monday 09 March 2026 00:33:22 +0000 (0:00:01.381) 0:07:00.818 ********** 2026-03-09 00:33:23.201199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:33:23.201210 | orchestrator | 2026-03-09 00:33:23.201221 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:23.201232 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.915) 0:07:01.733 ********** 2026-03-09 00:33:23.201242 | orchestrator | 2026-03-09 00:33:23.201253 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:23.201264 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.041) 0:07:01.774 ********** 2026-03-09 00:33:23.201275 | orchestrator | 2026-03-09 00:33:23.201285 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:23.201296 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.039) 0:07:01.813 ********** 2026-03-09 00:33:23.201315 | orchestrator | 2026-03-09 00:33:23.201326 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:23.201344 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.046) 0:07:01.860 ********** 2026-03-09 
00:33:51.295337 | orchestrator | 2026-03-09 00:33:51.295437 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:51.295448 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.040) 0:07:01.900 ********** 2026-03-09 00:33:51.295454 | orchestrator | 2026-03-09 00:33:51.295459 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:51.295465 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.040) 0:07:01.941 ********** 2026-03-09 00:33:51.295470 | orchestrator | 2026-03-09 00:33:51.295475 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:33:51.295481 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.054) 0:07:01.995 ********** 2026-03-09 00:33:51.295486 | orchestrator | 2026-03-09 00:33:51.295491 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-09 00:33:51.295496 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.040) 0:07:02.035 ********** 2026-03-09 00:33:51.295502 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:51.295507 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:51.295513 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:51.295518 | orchestrator | 2026-03-09 00:33:51.295523 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-09 00:33:51.295528 | orchestrator | Monday 09 March 2026 00:33:24 +0000 (0:00:01.282) 0:07:03.318 ********** 2026-03-09 00:33:51.295534 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:51.295540 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:51.295545 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:51.295569 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:51.295577 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:51.295586 | 
orchestrator | changed: [testbed-manager] 2026-03-09 00:33:51.295594 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:51.295603 | orchestrator | 2026-03-09 00:33:51.295611 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-09 00:33:51.295619 | orchestrator | Monday 09 March 2026 00:33:26 +0000 (0:00:02.256) 0:07:05.574 ********** 2026-03-09 00:33:51.295628 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:51.295635 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:51.295640 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:51.295645 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:51.295650 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:51.295656 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:51.295661 | orchestrator | changed: [testbed-manager] 2026-03-09 00:33:51.295666 | orchestrator | 2026-03-09 00:33:51.295671 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-09 00:33:51.295676 | orchestrator | Monday 09 March 2026 00:33:28 +0000 (0:00:01.442) 0:07:07.017 ********** 2026-03-09 00:33:51.295681 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:33:51.295686 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:51.295692 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:51.295697 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:51.295702 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:51.295707 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:51.295726 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:51.295732 | orchestrator | 2026-03-09 00:33:51.295737 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-09 00:33:51.295742 | orchestrator | Monday 09 March 2026 00:33:30 +0000 (0:00:02.384) 0:07:09.402 ********** 2026-03-09 00:33:51.295748 | orchestrator | 
skipping: [testbed-node-3] 2026-03-09 00:33:51.295753 | orchestrator | 2026-03-09 00:33:51.295758 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-09 00:33:51.295763 | orchestrator | Monday 09 March 2026 00:33:30 +0000 (0:00:00.105) 0:07:09.507 ********** 2026-03-09 00:33:51.295787 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:51.295793 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:51.295798 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:51.295803 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:51.295808 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:51.295813 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:51.295818 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:51.295823 | orchestrator | 2026-03-09 00:33:51.295829 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-09 00:33:51.295835 | orchestrator | Monday 09 March 2026 00:33:31 +0000 (0:00:01.056) 0:07:10.564 ********** 2026-03-09 00:33:51.295840 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:33:51.295845 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:33:51.295850 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:33:51.295855 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:33:51.295860 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:33:51.295865 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:33:51.295870 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:33:51.295875 | orchestrator | 2026-03-09 00:33:51.295880 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-09 00:33:51.295885 | orchestrator | Monday 09 March 2026 00:33:32 +0000 (0:00:00.588) 0:07:11.153 ********** 2026-03-09 00:33:51.295891 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:33:51.295899 | orchestrator | 2026-03-09 00:33:51.295904 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-09 00:33:51.295909 | orchestrator | Monday 09 March 2026 00:33:33 +0000 (0:00:01.213) 0:07:12.367 ********** 2026-03-09 00:33:51.295914 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:51.295919 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:51.295924 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:51.295929 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:51.295934 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:51.295939 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:51.295944 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:51.295950 | orchestrator | 2026-03-09 00:33:51.295955 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-09 00:33:51.295960 | orchestrator | Monday 09 March 2026 00:33:34 +0000 (0:00:00.899) 0:07:13.266 ********** 2026-03-09 00:33:51.295965 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-09 00:33:51.295983 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-09 00:33:51.295989 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-09 00:33:51.295994 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-09 00:33:51.295999 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-09 00:33:51.296004 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-09 00:33:51.296009 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-09 00:33:51.296014 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-03-09 00:33:51.296020 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-09 00:33:51.296025 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-09 00:33:51.296031 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-09 00:33:51.296036 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-09 00:33:51.296041 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-09 00:33:51.296046 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-09 00:33:51.296051 | orchestrator | 2026-03-09 00:33:51.296056 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-03-09 00:33:51.296066 | orchestrator | Monday 09 March 2026 00:33:37 +0000 (0:00:02.713) 0:07:15.980 ********** 2026-03-09 00:33:51.296071 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:33:51.296076 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:33:51.296081 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:33:51.296086 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:33:51.296091 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:33:51.296096 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:33:51.296101 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:33:51.296107 | orchestrator | 2026-03-09 00:33:51.296112 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-09 00:33:51.296117 | orchestrator | Monday 09 March 2026 00:33:37 +0000 (0:00:00.567) 0:07:16.547 ********** 2026-03-09 00:33:51.296123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-03-09 00:33:51.296130 | orchestrator | 2026-03-09 
00:33:51.296135 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-09 00:33:51.296140 | orchestrator | Monday 09 March 2026 00:33:38 +0000 (0:00:00.925) 0:07:17.472 ********** 2026-03-09 00:33:51.296145 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:51.296150 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:51.296155 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:51.296160 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:51.296166 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:51.296174 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:51.296179 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:51.296185 | orchestrator | 2026-03-09 00:33:51.296190 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-09 00:33:51.296195 | orchestrator | Monday 09 March 2026 00:33:39 +0000 (0:00:00.848) 0:07:18.321 ********** 2026-03-09 00:33:51.296200 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:51.296205 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:51.296210 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:51.296215 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:51.296220 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:51.296225 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:51.296230 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:51.296235 | orchestrator | 2026-03-09 00:33:51.296241 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-09 00:33:51.296246 | orchestrator | Monday 09 March 2026 00:33:40 +0000 (0:00:01.117) 0:07:19.438 ********** 2026-03-09 00:33:51.296251 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:33:51.296256 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:33:51.296261 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:33:51.296266 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:33:51.296271 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:33:51.296276 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:33:51.296281 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:33:51.296287 | orchestrator | 2026-03-09 00:33:51.296292 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-09 00:33:51.296297 | orchestrator | Monday 09 March 2026 00:33:41 +0000 (0:00:00.525) 0:07:19.964 ********** 2026-03-09 00:33:51.296302 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:51.296307 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:51.296312 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:51.296317 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:51.296322 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:51.296327 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:51.296332 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:51.296337 | orchestrator | 2026-03-09 00:33:51.296343 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-09 00:33:51.296348 | orchestrator | Monday 09 March 2026 00:33:42 +0000 (0:00:01.614) 0:07:21.579 ********** 2026-03-09 00:33:51.296357 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:33:51.296362 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:33:51.296367 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:33:51.296372 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:33:51.296377 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:33:51.296382 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:33:51.296387 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:33:51.296392 | orchestrator | 2026-03-09 00:33:51.296397 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-09 00:33:51.296402 | orchestrator | Monday 09 March 2026 00:33:43 
+0000 (0:00:00.519) 0:07:22.098 ********** 2026-03-09 00:33:51.296408 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:51.296413 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:51.296418 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:51.296423 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:51.296428 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:51.296433 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:51.296441 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:34:24.941743 | orchestrator | 2026-03-09 00:34:24.941857 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-09 00:34:24.941886 | orchestrator | Monday 09 March 2026 00:33:51 +0000 (0:00:07.912) 0:07:30.011 ********** 2026-03-09 00:34:24.941907 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:34:24.941926 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:34:24.941945 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:34:24.941963 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:34:24.941981 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:34:24.942000 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:34:24.942093 | orchestrator | ok: [testbed-manager] 2026-03-09 00:34:24.942117 | orchestrator | 2026-03-09 00:34:24.942129 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-09 00:34:24.942140 | orchestrator | Monday 09 March 2026 00:33:52 +0000 (0:00:01.500) 0:07:31.511 ********** 2026-03-09 00:34:24.942152 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:34:24.942163 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:34:24.942174 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:34:24.942225 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:34:24.942238 | orchestrator | ok: [testbed-manager] 2026-03-09 00:34:24.942251 | orchestrator | changed: 
[testbed-node-1] 2026-03-09 00:34:24.942264 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:34:24.942276 | orchestrator | 2026-03-09 00:34:24.942289 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-09 00:34:24.942302 | orchestrator | Monday 09 March 2026 00:33:54 +0000 (0:00:01.707) 0:07:33.218 ********** 2026-03-09 00:34:24.942315 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:34:24.942328 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:34:24.942340 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:34:24.942353 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:34:24.942366 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:34:24.942379 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:34:24.942392 | orchestrator | ok: [testbed-manager] 2026-03-09 00:34:24.942404 | orchestrator | 2026-03-09 00:34:24.942418 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-09 00:34:24.942431 | orchestrator | Monday 09 March 2026 00:33:56 +0000 (0:00:01.580) 0:07:34.799 ********** 2026-03-09 00:34:24.942444 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:34:24.942457 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:34:24.942470 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:34:24.942482 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:34:24.942494 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:34:24.942507 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:34:24.942519 | orchestrator | ok: [testbed-manager] 2026-03-09 00:34:24.942532 | orchestrator | 2026-03-09 00:34:24.942605 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-09 00:34:24.942647 | orchestrator | Monday 09 March 2026 00:33:57 +0000 (0:00:00.878) 0:07:35.678 ********** 2026-03-09 00:34:24.942658 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:34:24.942669 | orchestrator 
| skipping: [testbed-node-4]
2026-03-09 00:34:24.942680 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:24.942691 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:24.942703 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:24.942714 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:24.942725 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:24.942736 | orchestrator |
2026-03-09 00:34:24.942747 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-09 00:34:24.942758 | orchestrator | Monday 09 March 2026 00:33:58 +0000 (0:00:01.058) 0:07:36.737 **********
2026-03-09 00:34:24.942769 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:24.942780 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:24.942791 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:24.942802 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:24.942812 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:24.942823 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:24.942834 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:24.942845 | orchestrator |
2026-03-09 00:34:24.942856 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-09 00:34:24.942867 | orchestrator | Monday 09 March 2026 00:33:58 +0000 (0:00:00.532) 0:07:37.269 **********
2026-03-09 00:34:24.942878 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.942889 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.942900 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.942910 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.942921 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.942932 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.942943 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.942954 | orchestrator |
2026-03-09 00:34:24.942965 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-09 00:34:24.942976 | orchestrator | Monday 09 March 2026 00:33:59 +0000 (0:00:00.529) 0:07:37.798 **********
2026-03-09 00:34:24.942986 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.942997 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.943008 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.943037 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.943048 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.943059 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.943069 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.943080 | orchestrator |
2026-03-09 00:34:24.943091 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-09 00:34:24.943102 | orchestrator | Monday 09 March 2026 00:33:59 +0000 (0:00:00.713) 0:07:38.512 **********
2026-03-09 00:34:24.943112 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.943123 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.943134 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.943144 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.943155 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.943165 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.943176 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.943187 | orchestrator |
2026-03-09 00:34:24.943198 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-09 00:34:24.943209 | orchestrator | Monday 09 March 2026 00:34:00 +0000 (0:00:00.510) 0:07:39.022 **********
2026-03-09 00:34:24.943219 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.943230 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.943241 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.943252 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.943263 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.943273 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.943284 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.943295 | orchestrator |
2026-03-09 00:34:24.943334 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-09 00:34:24.943346 | orchestrator | Monday 09 March 2026 00:34:06 +0000 (0:00:05.736) 0:07:44.759 **********
2026-03-09 00:34:24.943357 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:24.943370 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:24.943389 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:24.943408 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:24.943427 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:24.943447 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:24.943466 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:24.943484 | orchestrator |
2026-03-09 00:34:24.943495 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-09 00:34:24.943506 | orchestrator | Monday 09 March 2026 00:34:06 +0000 (0:00:00.559) 0:07:45.318 **********
2026-03-09 00:34:24.943519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:34:24.943533 | orchestrator |
2026-03-09 00:34:24.943578 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-09 00:34:24.943590 | orchestrator | Monday 09 March 2026 00:34:07 +0000 (0:00:01.046) 0:07:46.365 **********
2026-03-09 00:34:24.943601 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.943611 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.943622 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.943633 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.943643 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.943654 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.943664 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.943675 | orchestrator |
2026-03-09 00:34:24.943686 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-09 00:34:24.943697 | orchestrator | Monday 09 March 2026 00:34:09 +0000 (0:00:02.153) 0:07:48.519 **********
2026-03-09 00:34:24.943707 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.943718 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.943729 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.943739 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.943750 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.943761 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.943771 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.943782 | orchestrator |
2026-03-09 00:34:24.943793 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-09 00:34:24.943803 | orchestrator | Monday 09 March 2026 00:34:10 +0000 (0:00:01.129) 0:07:49.648 **********
2026-03-09 00:34:24.943814 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:24.943825 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:24.943835 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:24.943846 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:24.943856 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:24.943867 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:24.943878 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:24.943889 | orchestrator |
2026-03-09 00:34:24.943906 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-09 00:34:24.943917 | orchestrator | Monday 09 March 2026 00:34:11 +0000 (0:00:00.929) 0:07:50.577 **********
2026-03-09 00:34:24.943929 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.943941 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.943952 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.943963 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.943982 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.943993 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.944003 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:34:24.944014 | orchestrator |
2026-03-09 00:34:24.944025 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-09 00:34:24.944035 | orchestrator | Monday 09 March 2026 00:34:13 +0000 (0:00:02.005) 0:07:52.582 **********
2026-03-09 00:34:24.944046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:34:24.944057 | orchestrator |
2026-03-09 00:34:24.944068 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-09 00:34:24.944078 | orchestrator | Monday 09 March 2026 00:34:14 +0000 (0:00:00.848) 0:07:53.431 **********
2026-03-09 00:34:24.944089 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:24.944100 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:24.944111 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:24.944122 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:24.944133 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:24.944143 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:24.944154 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:24.944165 | orchestrator |
2026-03-09 00:34:24.944183 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-09 00:34:56.356271 | orchestrator | Monday 09 March 2026 00:34:24 +0000 (0:00:10.174) 0:08:03.605 **********
2026-03-09 00:34:56.356386 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:56.356402 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:56.356413 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:56.356422 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:56.356432 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:56.356442 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:56.356452 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.356462 | orchestrator |
2026-03-09 00:34:56.356473 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-09 00:34:56.356483 | orchestrator | Monday 09 March 2026 00:34:26 +0000 (0:00:01.976) 0:08:05.582 **********
2026-03-09 00:34:56.356492 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:56.356502 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:56.356512 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:56.356521 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:56.356531 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:56.356540 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:56.356602 | orchestrator |
2026-03-09 00:34:56.356613 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-09 00:34:56.356623 | orchestrator | Monday 09 March 2026 00:34:28 +0000 (0:00:01.420) 0:08:07.002 **********
2026-03-09 00:34:56.356633 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.356644 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.356653 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.356663 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.356672 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.356682 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.356691 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.356701 | orchestrator |
2026-03-09 00:34:56.356711 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-09 00:34:56.356720 | orchestrator |
2026-03-09 00:34:56.356730 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-09 00:34:56.356768 | orchestrator | Monday 09 March 2026 00:34:29 +0000 (0:00:01.294) 0:08:08.296 **********
2026-03-09 00:34:56.356787 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:56.356805 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:56.356824 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:56.356842 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:56.356859 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:56.356877 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:56.356895 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:56.356913 | orchestrator |
2026-03-09 00:34:56.356927 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-09 00:34:56.356938 | orchestrator |
2026-03-09 00:34:56.356948 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-09 00:34:56.356959 | orchestrator | Monday 09 March 2026 00:34:30 +0000 (0:00:00.708) 0:08:09.005 **********
2026-03-09 00:34:56.356971 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.356983 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.356994 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.357005 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.357031 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.357042 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.357053 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.357064 | orchestrator |
2026-03-09 00:34:56.357076 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-09 00:34:56.357087 | orchestrator | Monday 09 March 2026 00:34:31 +0000 (0:00:01.310) 0:08:10.316 **********
2026-03-09 00:34:56.357098 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:56.357107 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:56.357117 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:56.357126 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:56.357136 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:56.357145 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:56.357154 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.357164 | orchestrator |
2026-03-09 00:34:56.357173 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-09 00:34:56.357183 | orchestrator | Monday 09 March 2026 00:34:33 +0000 (0:00:01.429) 0:08:11.746 **********
2026-03-09 00:34:56.357192 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:56.357205 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:56.357220 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:56.357245 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:56.357261 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:56.357276 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:56.357291 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:56.357306 | orchestrator |
2026-03-09 00:34:56.357320 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-09 00:34:56.357336 | orchestrator | Monday 09 March 2026 00:34:33 +0000 (0:00:00.538) 0:08:12.284 **********
2026-03-09 00:34:56.357352 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:34:56.357368 | orchestrator |
2026-03-09 00:34:56.357385 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-09 00:34:56.357401 | orchestrator | Monday 09 March 2026 00:34:34 +0000 (0:00:01.017) 0:08:13.302 **********
2026-03-09 00:34:56.357419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:34:56.357437 | orchestrator |
2026-03-09 00:34:56.357452 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-09 00:34:56.357467 | orchestrator | Monday 09 March 2026 00:34:35 +0000 (0:00:00.790) 0:08:14.093 **********
2026-03-09 00:34:56.357497 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.357514 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.357529 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.357544 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.357583 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.357599 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.357613 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.357629 | orchestrator |
2026-03-09 00:34:56.357672 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-09 00:34:56.357691 | orchestrator | Monday 09 March 2026 00:34:44 +0000 (0:00:09.439) 0:08:23.533 **********
2026-03-09 00:34:56.357708 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.357724 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.357740 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.357755 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.357771 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.357786 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.357804 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.357820 | orchestrator |
2026-03-09 00:34:56.357836 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-09 00:34:56.357853 | orchestrator | Monday 09 March 2026 00:34:45 +0000 (0:00:00.869) 0:08:24.403 **********
2026-03-09 00:34:56.357863 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.357872 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.357882 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.357891 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.357900 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.357910 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.357919 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.357928 | orchestrator |
2026-03-09 00:34:56.357938 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-09 00:34:56.357948 | orchestrator | Monday 09 March 2026 00:34:47 +0000 (0:00:01.338) 0:08:25.742 **********
2026-03-09 00:34:56.357957 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.357966 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.357976 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.357985 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.357994 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.358003 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.358066 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.358089 | orchestrator |
2026-03-09 00:34:56.358104 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-09 00:34:56.358120 | orchestrator | Monday 09 March 2026 00:34:48 +0000 (0:00:01.906) 0:08:27.648 **********
2026-03-09 00:34:56.358137 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.358156 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.358173 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.358192 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.358203 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.358212 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.358222 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.358231 | orchestrator |
2026-03-09 00:34:56.358241 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-09 00:34:56.358251 | orchestrator | Monday 09 March 2026 00:34:50 +0000 (0:00:01.258) 0:08:28.907 **********
2026-03-09 00:34:56.358260 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.358269 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.358279 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.358288 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.358308 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.358317 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.358327 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.358336 | orchestrator |
2026-03-09 00:34:56.358357 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-09 00:34:56.358367 | orchestrator |
2026-03-09 00:34:56.358376 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-09 00:34:56.358386 | orchestrator | Monday 09 March 2026 00:34:51 +0000 (0:00:01.145) 0:08:30.053 **********
2026-03-09 00:34:56.358395 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:34:56.358405 | orchestrator |
2026-03-09 00:34:56.358415 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-09 00:34:56.358425 | orchestrator | Monday 09 March 2026 00:34:52 +0000 (0:00:00.816) 0:08:30.869 **********
2026-03-09 00:34:56.358434 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:56.358443 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:56.358453 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:56.358462 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:56.358472 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:56.358481 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:56.358491 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.358500 | orchestrator |
2026-03-09 00:34:56.358510 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-09 00:34:56.358519 | orchestrator | Monday 09 March 2026 00:34:53 +0000 (0:00:01.062) 0:08:31.931 **********
2026-03-09 00:34:56.358529 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.358538 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.358568 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.358578 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.358588 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.358597 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.358607 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.358616 | orchestrator |
2026-03-09 00:34:56.358626 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-09 00:34:56.358635 | orchestrator | Monday 09 March 2026 00:34:54 +0000 (0:00:01.220) 0:08:33.152 **********
2026-03-09 00:34:56.358645 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-03-09 00:34:56.358654 | orchestrator |
2026-03-09 00:34:56.358664 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-09 00:34:56.358673 | orchestrator | Monday 09 March 2026 00:34:55 +0000 (0:00:01.016) 0:08:34.169 **********
2026-03-09 00:34:56.358683 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:56.358692 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:56.358706 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:56.358722 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:56.358739 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:56.358755 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:56.358771 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.358787 | orchestrator |
2026-03-09 00:34:56.358816 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-09 00:34:57.904958 | orchestrator | Monday 09 March 2026 00:34:56 +0000 (0:00:00.849) 0:08:35.018 **********
2026-03-09 00:34:57.905060 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:57.905077 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:57.905088 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:57.905100 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:57.905112 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:57.905119 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:57.905125 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:57.905132 | orchestrator |
2026-03-09 00:34:57.905139 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:34:57.905146 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-09 00:34:57.905191 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:34:57.905203 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:34:57.905213 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:34:57.905224 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-09 00:34:57.905235 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-09 00:34:57.905245 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-09 00:34:57.905257 | orchestrator |
2026-03-09 00:34:57.905264 | orchestrator |
2026-03-09 00:34:57.905270 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:34:57.905276 | orchestrator | Monday 09 March 2026 00:34:57 +0000 (0:00:01.146) 0:08:36.164 **********
2026-03-09 00:34:57.905282 | orchestrator | ===============================================================================
2026-03-09 00:34:57.905289 | orchestrator | osism.commons.packages : Install required packages --------------------- 88.78s
2026-03-09 00:34:57.905295 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.94s
2026-03-09 00:34:57.905313 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.93s
2026-03-09 00:34:57.905320 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.97s
2026-03-09 00:34:57.905326 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.15s
2026-03-09 00:34:57.905333 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.86s
2026-03-09 00:34:57.905339 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.94s
2026-03-09 00:34:57.905345 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.17s
2026-03-09 00:34:57.905351 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.76s
2026-03-09 00:34:57.905357 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.75s
2026-03-09 00:34:57.905364 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.52s
2026-03-09 00:34:57.905370 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.44s
2026-03-09 00:34:57.905376 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.05s
2026-03-09 00:34:57.905382 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.69s
2026-03-09 00:34:57.905388 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.57s
2026-03-09 00:34:57.905394 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.91s
2026-03-09 00:34:57.905400 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.68s
2026-03-09 00:34:57.905407 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.12s
2026-03-09 00:34:57.905413 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.81s
2026-03-09 00:34:57.905419 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.74s
2026-03-09 00:34:58.270449 | orchestrator | + osism apply fail2ban
2026-03-09 00:35:11.138185 | orchestrator | 2026-03-09 00:35:11 | INFO  | Prepare task for execution of fail2ban.
2026-03-09 00:35:11.224244 | orchestrator | 2026-03-09 00:35:11 | INFO  | Task 0b3b8ca1-786c-4682-885d-48f807567f0b (fail2ban) was prepared for execution.
2026-03-09 00:35:11.224343 | orchestrator | 2026-03-09 00:35:11 | INFO  | It takes a moment until task 0b3b8ca1-786c-4682-885d-48f807567f0b (fail2ban) has been started and output is visible here.
2026-03-09 00:35:32.932071 | orchestrator |
2026-03-09 00:35:32.932167 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-09 00:35:32.932179 | orchestrator |
2026-03-09 00:35:32.932188 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-09 00:35:32.932195 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.283) 0:00:00.283 **********
2026-03-09 00:35:32.932204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:35:32.932213 | orchestrator |
2026-03-09 00:35:32.932221 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-09 00:35:32.932228 | orchestrator | Monday 09 March 2026 00:35:17 +0000 (0:00:01.144) 0:00:01.428 **********
2026-03-09 00:35:32.932236 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:35:32.932244 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:35:32.932251 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:35:32.932259 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:35:32.932266 | orchestrator | changed: [testbed-manager]
2026-03-09 00:35:32.932273 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:35:32.932280 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:35:32.932287 | orchestrator |
2026-03-09 00:35:32.932294 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-09 00:35:32.932301 | orchestrator | Monday 09 March 2026 00:35:28 +0000 (0:00:10.948) 0:00:12.377 **********
2026-03-09 00:35:32.932309 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:35:32.932316 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:35:32.932323 | orchestrator | changed: [testbed-manager]
2026-03-09 00:35:32.932330 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:35:32.932337 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:35:32.932344 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:35:32.932351 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:35:32.932359 | orchestrator |
2026-03-09 00:35:32.932366 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-09 00:35:32.932373 | orchestrator | Monday 09 March 2026 00:35:29 +0000 (0:00:01.453) 0:00:13.830 **********
2026-03-09 00:35:32.932380 | orchestrator | ok: [testbed-manager]
2026-03-09 00:35:32.932388 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:35:32.932395 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:35:32.932402 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:35:32.932410 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:35:32.932417 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:35:32.932424 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:35:32.932431 | orchestrator |
2026-03-09 00:35:32.932438 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-09 00:35:32.932446 | orchestrator | Monday 09 March 2026 00:35:30 +0000 (0:00:01.482) 0:00:15.312 **********
2026-03-09 00:35:32.932453 | orchestrator | changed: [testbed-manager]
2026-03-09 00:35:32.932461 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:35:32.932468 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:35:32.932475 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:35:32.932482 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:35:32.932489 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:35:32.932496 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:35:32.932504 | orchestrator |
2026-03-09 00:35:32.932511 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:35:32.932633 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932646 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932676 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932685 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932693 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932702 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932710 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:35:32.932718 | orchestrator |
2026-03-09 00:35:32.932726 | orchestrator |
2026-03-09 00:35:32.932734 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:35:32.932743 | orchestrator | Monday 09 March 2026 00:35:32 +0000 (0:00:01.649) 0:00:16.962 **********
2026-03-09 00:35:32.932751 | orchestrator | ===============================================================================
2026-03-09 00:35:32.932760 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.95s
2026-03-09 00:35:32.932768 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-03-09 00:35:32.932776 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.48s
2026-03-09 00:35:32.932785 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s
2026-03-09 00:35:32.932793 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.14s
2026-03-09 00:35:33.266156 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-09 00:35:33.266255 | orchestrator | + osism apply network
2026-03-09 00:35:45.288309 | orchestrator | 2026-03-09 00:35:45 | INFO  | Prepare task for execution of network.
2026-03-09 00:35:45.358882 | orchestrator | 2026-03-09 00:35:45 | INFO  | Task 6d9c0157-9274-4f07-af81-d9574f11c4c4 (network) was prepared for execution.
2026-03-09 00:35:45.358971 | orchestrator | 2026-03-09 00:35:45 | INFO  | It takes a moment until task 6d9c0157-9274-4f07-af81-d9574f11c4c4 (network) has been started and output is visible here.
2026-03-09 00:36:14.667041 | orchestrator |
2026-03-09 00:36:14.668010 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-09 00:36:14.668050 | orchestrator |
2026-03-09 00:36:14.668067 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-09 00:36:14.668082 | orchestrator | Monday 09 March 2026 00:35:49 +0000 (0:00:00.257) 0:00:00.257 **********
2026-03-09 00:36:14.668097 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:14.668114 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:14.668129 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:14.668144 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:14.668159 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:14.668174 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:14.668189 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:14.668203 | orchestrator |
2026-03-09 00:36:14.668219 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-09 00:36:14.668234 | orchestrator | Monday 09 March 2026 00:35:50 +0000 (0:00:00.772) 0:00:01.030 **********
2026-03-09 00:36:14.668251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:36:14.668269 | orchestrator |
2026-03-09 00:36:14.668284 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-09 00:36:14.668299 | orchestrator | Monday 09 March 2026 00:35:51 +0000 (0:00:01.270) 0:00:02.301 **********
2026-03-09 00:36:14.668343 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:14.668358 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:14.668372 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:14.668386 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:14.668398 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:14.668412 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:14.668425 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:14.668439 | orchestrator |
2026-03-09 00:36:14.668448 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-09 00:36:14.668456 | orchestrator | Monday 09 March 2026 00:35:53 +0000 (0:00:02.060) 0:00:04.361 **********
2026-03-09 00:36:14.668464 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:14.668472 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:14.668479 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:14.668487 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:14.668495 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:14.668502 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:14.668510 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:14.668518 | orchestrator |
2026-03-09 00:36:14.668525 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-09 00:36:14.668533 | orchestrator | Monday 09 March 2026 00:35:55 +0000 (0:00:01.774) 0:00:06.136 **********
2026-03-09 00:36:14.668541 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-09 00:36:14.668550 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-09 00:36:14.668585 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-09 00:36:14.668599 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-09 00:36:14.668611 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-09 00:36:14.668624 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-09 00:36:14.668656 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-09 00:36:14.668669 | orchestrator |
2026-03-09 00:36:14.668681 | orchestrator | TASK [osism.commons.network :
Prepare netplan configuration template] ********** 2026-03-09 00:36:14.668690 | orchestrator | Monday 09 March 2026 00:35:56 +0000 (0:00:00.990) 0:00:07.126 ********** 2026-03-09 00:36:14.668698 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 00:36:14.668707 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:36:14.668714 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 00:36:14.668723 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 00:36:14.668731 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:36:14.668738 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 00:36:14.668746 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 00:36:14.668754 | orchestrator | 2026-03-09 00:36:14.668762 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-09 00:36:14.668770 | orchestrator | Monday 09 March 2026 00:36:00 +0000 (0:00:03.582) 0:00:10.709 ********** 2026-03-09 00:36:14.668778 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:14.668786 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:36:14.668793 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:36:14.668801 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:36:14.668809 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:36:14.668816 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:36:14.668824 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:36:14.668832 | orchestrator | 2026-03-09 00:36:14.668840 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-09 00:36:14.668848 | orchestrator | Monday 09 March 2026 00:36:01 +0000 (0:00:01.570) 0:00:12.279 ********** 2026-03-09 00:36:14.668855 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:36:14.668863 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:36:14.668871 | orchestrator | ok: [testbed-node-3 
-> localhost] 2026-03-09 00:36:14.668879 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 00:36:14.668886 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 00:36:14.668894 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 00:36:14.668910 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 00:36:14.668918 | orchestrator | 2026-03-09 00:36:14.668926 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-09 00:36:14.668934 | orchestrator | Monday 09 March 2026 00:36:03 +0000 (0:00:01.789) 0:00:14.069 ********** 2026-03-09 00:36:14.668942 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:14.668949 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:14.668957 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:14.668965 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:14.668973 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:14.668981 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:14.668989 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:14.668996 | orchestrator | 2026-03-09 00:36:14.669004 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-09 00:36:14.669032 | orchestrator | Monday 09 March 2026 00:36:04 +0000 (0:00:01.206) 0:00:15.275 ********** 2026-03-09 00:36:14.669041 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:14.669048 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:14.669056 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:14.669064 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:14.669072 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:14.669080 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:14.669087 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:14.669095 | orchestrator | 2026-03-09 00:36:14.669103 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-09 00:36:14.669111 | orchestrator | Monday 09 March 2026 00:36:05 +0000 (0:00:00.673) 0:00:15.948 ********** 2026-03-09 00:36:14.669119 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:14.669126 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:14.669134 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:14.669142 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:14.669150 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:14.669157 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:14.669165 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:14.669173 | orchestrator | 2026-03-09 00:36:14.669181 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-09 00:36:14.669189 | orchestrator | Monday 09 March 2026 00:36:07 +0000 (0:00:02.311) 0:00:18.260 ********** 2026-03-09 00:36:14.669196 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:14.669204 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:14.669212 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:14.669220 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:14.669228 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:14.669235 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:14.669292 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-09 00:36:14.669302 | orchestrator | 2026-03-09 00:36:14.669310 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-09 00:36:14.669318 | orchestrator | Monday 09 March 2026 00:36:08 +0000 (0:00:00.966) 0:00:19.227 ********** 2026-03-09 00:36:14.669326 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:14.669334 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:36:14.669342 | orchestrator | changed: [testbed-node-0] 2026-03-09 
00:36:14.669350 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:36:14.669358 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:36:14.669366 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:36:14.669373 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:36:14.669381 | orchestrator | 2026-03-09 00:36:14.669389 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-09 00:36:14.669397 | orchestrator | Monday 09 March 2026 00:36:10 +0000 (0:00:01.658) 0:00:20.886 ********** 2026-03-09 00:36:14.669411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:36:14.669427 | orchestrator | 2026-03-09 00:36:14.669435 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-09 00:36:14.669467 | orchestrator | Monday 09 March 2026 00:36:11 +0000 (0:00:01.295) 0:00:22.181 ********** 2026-03-09 00:36:14.669477 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:14.669485 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:14.669493 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:14.669501 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:14.669509 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:14.669517 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:14.669525 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:14.669533 | orchestrator | 2026-03-09 00:36:14.669541 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-09 00:36:14.669549 | orchestrator | Monday 09 March 2026 00:36:12 +0000 (0:00:00.936) 0:00:23.118 ********** 2026-03-09 00:36:14.669578 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:14.669593 | orchestrator | ok: [testbed-node-0] 2026-03-09 
00:36:14.669606 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:14.669617 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:14.669629 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:14.669638 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:14.669645 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:14.669653 | orchestrator | 2026-03-09 00:36:14.669661 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-09 00:36:14.669669 | orchestrator | Monday 09 March 2026 00:36:13 +0000 (0:00:00.856) 0:00:23.974 ********** 2026-03-09 00:36:14.669677 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669685 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669693 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669701 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669709 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669717 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669725 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669733 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669740 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669748 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669756 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669764 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:36:14.669772 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669780 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:36:14.669787 | orchestrator | 2026-03-09 00:36:14.669802 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-09 00:36:30.501551 | orchestrator | Monday 09 March 2026 00:36:14 +0000 (0:00:01.250) 0:00:25.224 ********** 2026-03-09 00:36:30.501710 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:30.501732 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:30.501751 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:30.501769 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:30.501786 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:30.501802 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:30.501818 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:30.501834 | orchestrator | 2026-03-09 00:36:30.501854 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-09 00:36:30.501902 | orchestrator | Monday 09 March 2026 00:36:15 +0000 (0:00:00.655) 0:00:25.880 ********** 2026-03-09 00:36:30.501922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-5, testbed-node-2, testbed-node-3 2026-03-09 00:36:30.501943 | orchestrator | 2026-03-09 00:36:30.501961 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-09 00:36:30.501979 | orchestrator | Monday 09 March 2026 00:36:19 +0000 (0:00:04.624) 0:00:30.505 ********** 2026-03-09 00:36:30.501999 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502210 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': 
'192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502415 | orchestrator | 2026-03-09 00:36:30.502435 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-09 00:36:30.502454 | orchestrator | Monday 09 March 2026 00:36:25 +0000 (0:00:05.361) 0:00:35.866 ********** 2026-03-09 00:36:30.502474 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502530 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:36:30.502710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:30.502822 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:44.486505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:36:44.486595 | orchestrator | 2026-03-09 00:36:44.486604 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-09 00:36:44.486610 | orchestrator | Monday 09 March 2026 00:36:30 +0000 (0:00:05.322) 0:00:41.189 ********** 2026-03-09 00:36:44.486616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:36:44.486620 | orchestrator | 2026-03-09 00:36:44.486624 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-09 00:36:44.486629 | orchestrator | Monday 09 March 2026 00:36:32 +0000 (0:00:01.392) 0:00:42.581 ********** 2026-03-09 00:36:44.486633 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:44.486638 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:44.486642 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:44.486645 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:44.486649 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:44.486653 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:44.486657 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:44.486660 | orchestrator | 2026-03-09 00:36:44.486664 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-09 00:36:44.486668 | orchestrator | Monday 09 March 2026 00:36:33 +0000 (0:00:01.228) 0:00:43.809 ********** 2026-03-09 00:36:44.486672 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486676 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:36:44.486680 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486684 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486688 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486703 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:36:44.486708 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486711 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486715 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:44.486720 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486723 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:36:44.486727 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486734 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486740 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:44.486749 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486775 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-09 00:36:44.486782 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486789 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486795 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:44.486801 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486808 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:36:44.486813 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486817 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486821 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:44.486825 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486828 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:36:44.486832 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486836 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486840 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:44.486843 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:44.486847 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:36:44.486851 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:36:44.486855 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:36:44.486859 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:36:44.486862 | 
orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:44.486866 | orchestrator | 2026-03-09 00:36:44.486870 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-09 00:36:44.486884 | orchestrator | Monday 09 March 2026 00:36:34 +0000 (0:00:00.979) 0:00:44.788 ********** 2026-03-09 00:36:44.486889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:36:44.486892 | orchestrator | 2026-03-09 00:36:44.486896 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-09 00:36:44.486900 | orchestrator | Monday 09 March 2026 00:36:35 +0000 (0:00:01.306) 0:00:46.094 ********** 2026-03-09 00:36:44.486904 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:44.486908 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:44.486911 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:44.486915 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:44.486919 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:44.486922 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:44.486926 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:44.486930 | orchestrator | 2026-03-09 00:36:44.486934 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-09 00:36:44.486937 | orchestrator | Monday 09 March 2026 00:36:36 +0000 (0:00:00.676) 0:00:46.771 ********** 2026-03-09 00:36:44.486941 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:44.486945 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:44.486949 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:44.486952 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:44.486956 | 
orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:44.486960 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:44.486963 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:44.486972 | orchestrator | 2026-03-09 00:36:44.486975 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-09 00:36:44.486979 | orchestrator | Monday 09 March 2026 00:36:37 +0000 (0:00:00.827) 0:00:47.599 ********** 2026-03-09 00:36:44.486983 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:44.486987 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:44.486990 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:44.486994 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:44.486998 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:44.487001 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:44.487005 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:44.487009 | orchestrator | 2026-03-09 00:36:44.487012 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-09 00:36:44.487016 | orchestrator | Monday 09 March 2026 00:36:37 +0000 (0:00:00.647) 0:00:48.246 ********** 2026-03-09 00:36:44.487024 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:44.487028 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:44.487031 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:44.487035 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:44.487039 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:44.487042 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:44.487046 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:44.487050 | orchestrator | 2026-03-09 00:36:44.487054 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-09 00:36:44.487057 | orchestrator | Monday 09 March 2026 00:36:39 +0000 (0:00:01.771) 0:00:50.018 ********** 
2026-03-09 00:36:44.487061 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:44.487065 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:44.487069 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:44.487072 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:44.487076 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:44.487080 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:44.487085 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:44.487089 | orchestrator | 2026-03-09 00:36:44.487093 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-03-09 00:36:44.487098 | orchestrator | Monday 09 March 2026 00:36:40 +0000 (0:00:01.162) 0:00:51.180 ********** 2026-03-09 00:36:44.487103 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:44.487107 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:44.487112 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:44.487116 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:44.487120 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:44.487125 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:44.487129 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:44.487134 | orchestrator | 2026-03-09 00:36:44.487138 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-09 00:36:44.487142 | orchestrator | Monday 09 March 2026 00:36:43 +0000 (0:00:02.441) 0:00:53.622 ********** 2026-03-09 00:36:44.487147 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:44.487151 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:44.487156 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:44.487160 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:44.487165 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:44.487169 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:44.487174 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
00:36:44.487178 | orchestrator | 2026-03-09 00:36:44.487183 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-09 00:36:44.487188 | orchestrator | Monday 09 March 2026 00:36:43 +0000 (0:00:00.856) 0:00:54.478 ********** 2026-03-09 00:36:44.487192 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:44.487196 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:44.487199 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:44.487203 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:44.487207 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:44.487211 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:44.487218 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:44.487222 | orchestrator | 2026-03-09 00:36:44.487225 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:36:44.487230 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 00:36:44.487235 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:36:44.487242 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:36:44.869337 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:36:44.869425 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:36:44.869436 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:36:44.869445 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:36:44.869453 | orchestrator | 2026-03-09 00:36:44.869462 | orchestrator | 2026-03-09 00:36:44.869470 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:36:44.869480 | orchestrator | Monday 09 March 2026 00:36:44 +0000 (0:00:00.567) 0:00:55.045 ********** 2026-03-09 00:36:44.869488 | orchestrator | =============================================================================== 2026-03-09 00:36:44.869495 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.36s 2026-03-09 00:36:44.869503 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.32s 2026-03-09 00:36:44.869511 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.62s 2026-03-09 00:36:44.869519 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.58s 2026-03-09 00:36:44.869527 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.44s 2026-03-09 00:36:44.869535 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s 2026-03-09 00:36:44.869542 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.06s 2026-03-09 00:36:44.869550 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.79s 2026-03-09 00:36:44.869588 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2026-03-09 00:36:44.869597 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.77s 2026-03-09 00:36:44.869605 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s 2026-03-09 00:36:44.869613 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.57s 2026-03-09 00:36:44.869621 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.39s 2026-03-09 00:36:44.869629 | orchestrator | 
osism.commons.network : Include network extra init ---------------------- 1.31s 2026-03-09 00:36:44.869637 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2026-03-09 00:36:44.869645 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2026-03-09 00:36:44.869652 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s 2026-03-09 00:36:44.869660 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s 2026-03-09 00:36:44.869668 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.21s 2026-03-09 00:36:44.869676 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.16s 2026-03-09 00:36:45.242331 | orchestrator | + osism apply wireguard 2026-03-09 00:36:57.348269 | orchestrator | 2026-03-09 00:36:57 | INFO  | Prepare task for execution of wireguard. 2026-03-09 00:36:57.422969 | orchestrator | 2026-03-09 00:36:57 | INFO  | Task b345e4a3-57b8-40f7-bc05-b4980d2c79a4 (wireguard) was prepared for execution. 2026-03-09 00:36:57.423075 | orchestrator | 2026-03-09 00:36:57 | INFO  | It takes a moment until task b345e4a3-57b8-40f7-bc05-b4980d2c79a4 (wireguard) has been started and output is visible here. 
2026-03-09 00:37:18.305379 | orchestrator | 2026-03-09 00:37:18.305508 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-09 00:37:18.305533 | orchestrator | 2026-03-09 00:37:18.305552 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-09 00:37:18.305639 | orchestrator | Monday 09 March 2026 00:37:01 +0000 (0:00:00.233) 0:00:00.233 ********** 2026-03-09 00:37:18.305659 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:18.305677 | orchestrator | 2026-03-09 00:37:18.305695 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-09 00:37:18.305713 | orchestrator | Monday 09 March 2026 00:37:03 +0000 (0:00:01.551) 0:00:01.785 ********** 2026-03-09 00:37:18.305731 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.305750 | orchestrator | 2026-03-09 00:37:18.305769 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-09 00:37:18.305789 | orchestrator | Monday 09 March 2026 00:37:10 +0000 (0:00:06.934) 0:00:08.719 ********** 2026-03-09 00:37:18.305807 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.305826 | orchestrator | 2026-03-09 00:37:18.305839 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-09 00:37:18.305850 | orchestrator | Monday 09 March 2026 00:37:10 +0000 (0:00:00.580) 0:00:09.299 ********** 2026-03-09 00:37:18.305861 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.305872 | orchestrator | 2026-03-09 00:37:18.305883 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-09 00:37:18.305896 | orchestrator | Monday 09 March 2026 00:37:11 +0000 (0:00:00.470) 0:00:09.770 ********** 2026-03-09 00:37:18.305932 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:18.305947 | orchestrator | 2026-03-09 
00:37:18.305966 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-09 00:37:18.305992 | orchestrator | Monday 09 March 2026 00:37:11 +0000 (0:00:00.712) 0:00:10.483 ********** 2026-03-09 00:37:18.306013 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:18.306107 | orchestrator | 2026-03-09 00:37:18.306126 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-09 00:37:18.306148 | orchestrator | Monday 09 March 2026 00:37:12 +0000 (0:00:00.419) 0:00:10.903 ********** 2026-03-09 00:37:18.306168 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:18.306184 | orchestrator | 2026-03-09 00:37:18.306199 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-09 00:37:18.306213 | orchestrator | Monday 09 March 2026 00:37:12 +0000 (0:00:00.421) 0:00:11.325 ********** 2026-03-09 00:37:18.306227 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.306240 | orchestrator | 2026-03-09 00:37:18.306253 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-09 00:37:18.306266 | orchestrator | Monday 09 March 2026 00:37:14 +0000 (0:00:01.207) 0:00:12.532 ********** 2026-03-09 00:37:18.306279 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:37:18.306290 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.306301 | orchestrator | 2026-03-09 00:37:18.306312 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-09 00:37:18.306323 | orchestrator | Monday 09 March 2026 00:37:15 +0000 (0:00:00.962) 0:00:13.495 ********** 2026-03-09 00:37:18.306334 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.306364 | orchestrator | 2026-03-09 00:37:18.306376 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-09 
00:37:18.306414 | orchestrator | Monday 09 March 2026 00:37:16 +0000 (0:00:01.751) 0:00:15.246 ********** 2026-03-09 00:37:18.306427 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:18.306437 | orchestrator | 2026-03-09 00:37:18.306448 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:37:18.306459 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:18.306472 | orchestrator | 2026-03-09 00:37:18.306482 | orchestrator | 2026-03-09 00:37:18.306493 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:37:18.306504 | orchestrator | Monday 09 March 2026 00:37:17 +0000 (0:00:01.097) 0:00:16.344 ********** 2026-03-09 00:37:18.306515 | orchestrator | =============================================================================== 2026-03-09 00:37:18.306533 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.93s 2026-03-09 00:37:18.306544 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s 2026-03-09 00:37:18.306555 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s 2026-03-09 00:37:18.306602 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s 2026-03-09 00:37:18.306613 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.10s 2026-03-09 00:37:18.306626 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2026-03-09 00:37:18.306644 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.71s 2026-03-09 00:37:18.306662 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2026-03-09 00:37:18.306680 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.47s 2026-03-09 00:37:18.306698 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-03-09 00:37:18.306717 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-03-09 00:37:18.707192 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-09 00:37:18.737161 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-09 00:37:18.737258 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-09 00:37:18.813869 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 196 0 --:--:-- --:--:-- --:--:-- 197 2026-03-09 00:37:18.827614 | orchestrator | + osism apply --environment custom workarounds 2026-03-09 00:37:21.164472 | orchestrator | 2026-03-09 00:37:21 | INFO  | Trying to run play workarounds in environment custom 2026-03-09 00:37:31.203855 | orchestrator | 2026-03-09 00:37:31 | INFO  | Prepare task for execution of workarounds. 2026-03-09 00:37:31.277461 | orchestrator | 2026-03-09 00:37:31 | INFO  | Task 129d3633-3b6d-46d9-8e27-aaa3e9a26dd5 (workarounds) was prepared for execution. 2026-03-09 00:37:31.277607 | orchestrator | 2026-03-09 00:37:31 | INFO  | It takes a moment until task 129d3633-3b6d-46d9-8e27-aaa3e9a26dd5 (workarounds) has been started and output is visible here. 
2026-03-09 00:37:57.457067 | orchestrator | 2026-03-09 00:37:57.457189 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:37:57.457206 | orchestrator | 2026-03-09 00:37:57.457219 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-09 00:37:57.457230 | orchestrator | Monday 09 March 2026 00:37:35 +0000 (0:00:00.140) 0:00:00.140 ********** 2026-03-09 00:37:57.457241 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457253 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457264 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457275 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457309 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457321 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457332 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-09 00:37:57.457343 | orchestrator | 2026-03-09 00:37:57.457354 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-09 00:37:57.457366 | orchestrator | 2026-03-09 00:37:57.457385 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-09 00:37:57.457404 | orchestrator | Monday 09 March 2026 00:37:36 +0000 (0:00:00.810) 0:00:00.951 ********** 2026-03-09 00:37:57.457423 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:57.457442 | orchestrator | 2026-03-09 00:37:57.457459 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-09 00:37:57.457477 | orchestrator | 2026-03-09 00:37:57.457495 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-09 00:37:57.457512 | orchestrator | Monday 09 March 2026 00:37:38 +0000 (0:00:02.344) 0:00:03.295 ********** 2026-03-09 00:37:57.457530 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:37:57.457547 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:37:57.457663 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:37:57.457682 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:37:57.457701 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:37:57.457723 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:37:57.457747 | orchestrator | 2026-03-09 00:37:57.457770 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-09 00:37:57.457792 | orchestrator | 2026-03-09 00:37:57.457815 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-09 00:37:57.457838 | orchestrator | Monday 09 March 2026 00:37:40 +0000 (0:00:01.804) 0:00:05.100 ********** 2026-03-09 00:37:57.457862 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:37:57.457887 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:37:57.457911 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:37:57.457936 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:37:57.457955 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:37:57.457990 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:37:57.458008 | orchestrator | 2026-03-09 00:37:57.458092 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-09 00:37:57.458114 | orchestrator | Monday 09 March 2026 00:37:42 +0000 (0:00:01.480) 0:00:06.580 ********** 2026-03-09 00:37:57.458132 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:57.458151 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:57.458170 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:37:57.458188 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:57.458209 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:57.458230 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:57.458251 | orchestrator | 2026-03-09 00:37:57.458271 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-09 00:37:57.458292 | orchestrator | Monday 09 March 2026 00:37:45 +0000 (0:00:03.927) 0:00:10.507 ********** 2026-03-09 00:37:57.458312 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:37:57.458331 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:37:57.458349 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:37:57.458368 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:37:57.458386 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:37:57.458404 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:37:57.458447 | orchestrator | 2026-03-09 00:37:57.458466 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-09 00:37:57.458485 | orchestrator | 2026-03-09 00:37:57.458504 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-09 00:37:57.458521 | orchestrator | Monday 09 March 2026 00:37:46 +0000 (0:00:00.719) 0:00:11.227 ********** 2026-03-09 00:37:57.458540 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:57.458590 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:57.458609 | orchestrator | changed: [testbed-node-5] 2026-03-09 
00:37:57.458627 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:57.458646 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:57.458665 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:57.458682 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:57.458701 | orchestrator | 2026-03-09 00:37:57.458713 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-09 00:37:57.458724 | orchestrator | Monday 09 March 2026 00:37:48 +0000 (0:00:01.630) 0:00:12.858 ********** 2026-03-09 00:37:57.458735 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:57.458745 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:57.458756 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:37:57.458766 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:57.458777 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:57.458787 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:57.458822 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:57.458834 | orchestrator | 2026-03-09 00:37:57.458844 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-09 00:37:57.458855 | orchestrator | Monday 09 March 2026 00:37:49 +0000 (0:00:01.654) 0:00:14.512 ********** 2026-03-09 00:37:57.458866 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:37:57.458877 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:37:57.458888 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:37:57.458898 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:57.458909 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:37:57.458919 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:37:57.458930 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:37:57.458941 | orchestrator | 2026-03-09 00:37:57.458951 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-09 00:37:57.458962 | orchestrator 
| Monday 09 March 2026 00:37:51 +0000 (0:00:01.573) 0:00:16.086 ********** 2026-03-09 00:37:57.458973 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:57.458984 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:57.458994 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:37:57.459005 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:57.459015 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:57.459026 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:57.459037 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:57.459047 | orchestrator | 2026-03-09 00:37:57.459058 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-09 00:37:57.459069 | orchestrator | Monday 09 March 2026 00:37:53 +0000 (0:00:01.807) 0:00:17.893 ********** 2026-03-09 00:37:57.459079 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:37:57.459090 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:37:57.459100 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:37:57.459111 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:37:57.459121 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:37:57.459132 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:37:57.459142 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:37:57.459153 | orchestrator | 2026-03-09 00:37:57.459164 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-09 00:37:57.459174 | orchestrator | 2026-03-09 00:37:57.459185 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-09 00:37:57.459196 | orchestrator | Monday 09 March 2026 00:37:53 +0000 (0:00:00.655) 0:00:18.549 ********** 2026-03-09 00:37:57.459206 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:57.459229 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:37:57.459239 | orchestrator | ok: 
[testbed-node-5] 2026-03-09 00:37:57.459250 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:37:57.459260 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:37:57.459271 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:37:57.459281 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:37:57.459292 | orchestrator | 2026-03-09 00:37:57.459303 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:37:57.459316 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:37:57.459328 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:57.459339 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:57.459360 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:57.459371 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:57.459382 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:57.459393 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:57.459404 | orchestrator | 2026-03-09 00:37:57.459414 | orchestrator | 2026-03-09 00:37:57.459425 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:37:57.459436 | orchestrator | Monday 09 March 2026 00:37:57 +0000 (0:00:03.448) 0:00:21.998 ********** 2026-03-09 00:37:57.459447 | orchestrator | =============================================================================== 2026-03-09 00:37:57.459457 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.93s 2026-03-09 00:37:57.459468 | orchestrator | 
Install python3-docker -------------------------------------------------- 3.45s 2026-03-09 00:37:57.459478 | orchestrator | Apply netplan configuration --------------------------------------------- 2.34s 2026-03-09 00:37:57.459489 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s 2026-03-09 00:37:57.459499 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2026-03-09 00:37:57.459510 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s 2026-03-09 00:37:57.459520 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.63s 2026-03-09 00:37:57.459531 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s 2026-03-09 00:37:57.459542 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s 2026-03-09 00:37:57.459573 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s 2026-03-09 00:37:57.459585 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s 2026-03-09 00:37:57.459603 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2026-03-09 00:37:58.104670 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-09 00:38:10.257038 | orchestrator | 2026-03-09 00:38:10 | INFO  | Prepare task for execution of reboot. 2026-03-09 00:38:10.338358 | orchestrator | 2026-03-09 00:38:10 | INFO  | Task 5092553f-9701-42a2-9e51-f19dbad87790 (reboot) was prepared for execution. 2026-03-09 00:38:10.338468 | orchestrator | 2026-03-09 00:38:10 | INFO  | It takes a moment until task 5092553f-9701-42a2-9e51-f19dbad87790 (reboot) has been started and output is visible here. 
2026-03-09 00:38:21.085945 | orchestrator | 2026-03-09 00:38:21.086107 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:38:21.086127 | orchestrator | 2026-03-09 00:38:21.086139 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:38:21.086151 | orchestrator | Monday 09 March 2026 00:38:14 +0000 (0:00:00.232) 0:00:00.232 ********** 2026-03-09 00:38:21.086162 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:38:21.086174 | orchestrator | 2026-03-09 00:38:21.086185 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:38:21.086196 | orchestrator | Monday 09 March 2026 00:38:15 +0000 (0:00:00.136) 0:00:00.369 ********** 2026-03-09 00:38:21.086207 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:38:21.086217 | orchestrator | 2026-03-09 00:38:21.086228 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:38:21.086239 | orchestrator | Monday 09 March 2026 00:38:16 +0000 (0:00:00.975) 0:00:01.345 ********** 2026-03-09 00:38:21.086250 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:38:21.086261 | orchestrator | 2026-03-09 00:38:21.086271 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:38:21.086282 | orchestrator | 2026-03-09 00:38:21.086293 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:38:21.086304 | orchestrator | Monday 09 March 2026 00:38:16 +0000 (0:00:00.125) 0:00:01.471 ********** 2026-03-09 00:38:21.086314 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:38:21.086325 | orchestrator | 2026-03-09 00:38:21.086336 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:38:21.086347 | orchestrator | Monday 09 March 2026 
00:38:16 +0000 (0:00:00.121) 0:00:01.592 ********** 2026-03-09 00:38:21.086358 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:38:21.086369 | orchestrator | 2026-03-09 00:38:21.086380 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:38:21.086391 | orchestrator | Monday 09 March 2026 00:38:16 +0000 (0:00:00.664) 0:00:02.257 ********** 2026-03-09 00:38:21.086402 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:38:21.086413 | orchestrator | 2026-03-09 00:38:21.086423 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:38:21.086434 | orchestrator | 2026-03-09 00:38:21.086445 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:38:21.086456 | orchestrator | Monday 09 March 2026 00:38:17 +0000 (0:00:00.133) 0:00:02.390 ********** 2026-03-09 00:38:21.086466 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:38:21.086479 | orchestrator | 2026-03-09 00:38:21.086493 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:38:21.086522 | orchestrator | Monday 09 March 2026 00:38:17 +0000 (0:00:00.211) 0:00:02.601 ********** 2026-03-09 00:38:21.086536 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:38:21.086572 | orchestrator | 2026-03-09 00:38:21.086585 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:38:21.086599 | orchestrator | Monday 09 March 2026 00:38:17 +0000 (0:00:00.697) 0:00:03.299 ********** 2026-03-09 00:38:21.086612 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:38:21.086624 | orchestrator | 2026-03-09 00:38:21.086638 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:38:21.086648 | orchestrator | 2026-03-09 00:38:21.086659 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-09 00:38:21.086670 | orchestrator | Monday 09 March 2026 00:38:18 +0000 (0:00:00.128) 0:00:03.428 ********** 2026-03-09 00:38:21.086681 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:38:21.086692 | orchestrator | 2026-03-09 00:38:21.086702 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:38:21.086713 | orchestrator | Monday 09 March 2026 00:38:18 +0000 (0:00:00.125) 0:00:03.554 ********** 2026-03-09 00:38:21.086724 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:38:21.086759 | orchestrator | 2026-03-09 00:38:21.086771 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:38:21.086782 | orchestrator | Monday 09 March 2026 00:38:18 +0000 (0:00:00.683) 0:00:04.237 ********** 2026-03-09 00:38:21.086793 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:38:21.086804 | orchestrator | 2026-03-09 00:38:21.086815 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:38:21.086826 | orchestrator | 2026-03-09 00:38:21.086837 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:38:21.086847 | orchestrator | Monday 09 March 2026 00:38:19 +0000 (0:00:00.128) 0:00:04.366 ********** 2026-03-09 00:38:21.086867 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:38:21.086887 | orchestrator | 2026-03-09 00:38:21.086907 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:38:21.086928 | orchestrator | Monday 09 March 2026 00:38:19 +0000 (0:00:00.116) 0:00:04.483 ********** 2026-03-09 00:38:21.086948 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:38:21.086967 | orchestrator | 2026-03-09 00:38:21.086980 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-09 00:38:21.086991 | orchestrator | Monday 09 March 2026 00:38:19 +0000 (0:00:00.647) 0:00:05.131 ********** 2026-03-09 00:38:21.087001 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:38:21.087012 | orchestrator | 2026-03-09 00:38:21.087023 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:38:21.087034 | orchestrator | 2026-03-09 00:38:21.087044 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:38:21.087055 | orchestrator | Monday 09 March 2026 00:38:19 +0000 (0:00:00.124) 0:00:05.255 ********** 2026-03-09 00:38:21.087066 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:38:21.087077 | orchestrator | 2026-03-09 00:38:21.087088 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:38:21.087098 | orchestrator | Monday 09 March 2026 00:38:20 +0000 (0:00:00.098) 0:00:05.354 ********** 2026-03-09 00:38:21.087109 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:38:21.087120 | orchestrator | 2026-03-09 00:38:21.087131 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:38:21.087142 | orchestrator | Monday 09 March 2026 00:38:20 +0000 (0:00:00.699) 0:00:06.053 ********** 2026-03-09 00:38:21.087171 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:38:21.087183 | orchestrator | 2026-03-09 00:38:21.087194 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:38:21.087206 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:38:21.087218 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:38:21.087229 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-09 00:38:21.087240 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:38:21.087251 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:38:21.087262 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:38:21.087273 | orchestrator | 2026-03-09 00:38:21.087284 | orchestrator | 2026-03-09 00:38:21.087295 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:38:21.087306 | orchestrator | Monday 09 March 2026 00:38:20 +0000 (0:00:00.035) 0:00:06.089 ********** 2026-03-09 00:38:21.087317 | orchestrator | =============================================================================== 2026-03-09 00:38:21.087338 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.37s 2026-03-09 00:38:21.087349 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2026-03-09 00:38:21.087360 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-03-09 00:38:21.416900 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-09 00:38:33.507449 | orchestrator | 2026-03-09 00:38:33 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-09 00:38:33.576269 | orchestrator | 2026-03-09 00:38:33 | INFO  | Task e4eff77b-e252-487a-9107-bc843660a8a0 (wait-for-connection) was prepared for execution. 2026-03-09 00:38:33.576383 | orchestrator | 2026-03-09 00:38:33 | INFO  | It takes a moment until task e4eff77b-e252-487a-9107-bc843660a8a0 (wait-for-connection) has been started and output is visible here. 
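The pattern in the plays above — fire the reboot without waiting for it to complete, then verify reachability in a separate follow-up step (`osism apply wait-for-connection`) — boils down to a retry loop against the remote host. A minimal sketch, assuming a generic SSH probe (the helper name `wait_for_ssh` and its parameters are hypothetical; the real check is Ansible's `wait_for_connection` module run via osism):

```shell
#!/usr/bin/env bash
# Sketch: poll a host with a cheap SSH command until it answers,
# giving up after a total timeout. Mirrors the "reboot, do not wait"
# + "wait until remote system is reachable" split seen in the log.
wait_for_ssh() {
    local host="$1" timeout="${2:-300}" interval="${3:-5}" waited=0
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        waited=$(( waited + interval ))
        if (( waited >= timeout )); then
            echo "host ${host} not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```

Splitting reboot and reachability into two steps keeps the reboot task from blocking on each node sequentially; all nodes reboot in parallel and a single later play confirms they all came back.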
2026-03-09 00:38:49.746119 | orchestrator | 2026-03-09 00:38:49.746234 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-09 00:38:49.746250 | orchestrator | 2026-03-09 00:38:49.746261 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-09 00:38:49.746273 | orchestrator | Monday 09 March 2026 00:38:37 +0000 (0:00:00.231) 0:00:00.231 ********** 2026-03-09 00:38:49.746283 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:49.746294 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:49.746304 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:49.746314 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:49.746324 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:49.746333 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:49.746343 | orchestrator | 2026-03-09 00:38:49.746353 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:38:49.746364 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:49.746375 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:49.746386 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:49.746395 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:49.746405 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:49.746415 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:49.746425 | orchestrator | 2026-03-09 00:38:49.746435 | orchestrator | 2026-03-09 00:38:49.746445 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-09 00:38:49.746454 | orchestrator | Monday 09 March 2026 00:38:49 +0000 (0:00:11.569) 0:00:11.800 ********** 2026-03-09 00:38:49.746464 | orchestrator | =============================================================================== 2026-03-09 00:38:49.746474 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2026-03-09 00:38:50.061983 | orchestrator | + osism apply hddtemp 2026-03-09 00:39:02.145341 | orchestrator | 2026-03-09 00:39:02 | INFO  | Prepare task for execution of hddtemp. 2026-03-09 00:39:02.216857 | orchestrator | 2026-03-09 00:39:02 | INFO  | Task 6b66d997-cfb7-43b4-8daa-f19242680f43 (hddtemp) was prepared for execution. 2026-03-09 00:39:02.216948 | orchestrator | 2026-03-09 00:39:02 | INFO  | It takes a moment until task 6b66d997-cfb7-43b4-8daa-f19242680f43 (hddtemp) has been started and output is visible here. 2026-03-09 00:39:29.610482 | orchestrator | 2026-03-09 00:39:29.610694 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-09 00:39:29.610750 | orchestrator | 2026-03-09 00:39:29.610764 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-09 00:39:29.610775 | orchestrator | Monday 09 March 2026 00:39:06 +0000 (0:00:00.272) 0:00:00.272 ********** 2026-03-09 00:39:29.610786 | orchestrator | ok: [testbed-manager] 2026-03-09 00:39:29.610798 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:39:29.610810 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:39:29.610820 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:39:29.610831 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:39:29.610842 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:39:29.610852 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:39:29.610863 | orchestrator | 2026-03-09 00:39:29.610874 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-09 00:39:29.610884 | orchestrator | Monday 09 March 2026 00:39:07 +0000 (0:00:00.720) 0:00:00.992 ********** 2026-03-09 00:39:29.610897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:39:29.610911 | orchestrator | 2026-03-09 00:39:29.610922 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-09 00:39:29.610933 | orchestrator | Monday 09 March 2026 00:39:08 +0000 (0:00:01.245) 0:00:02.238 ********** 2026-03-09 00:39:29.610943 | orchestrator | ok: [testbed-manager] 2026-03-09 00:39:29.610954 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:39:29.610964 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:39:29.610975 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:39:29.610985 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:39:29.610996 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:39:29.611009 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:39:29.611022 | orchestrator | 2026-03-09 00:39:29.611035 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-09 00:39:29.611047 | orchestrator | Monday 09 March 2026 00:39:10 +0000 (0:00:01.694) 0:00:03.932 ********** 2026-03-09 00:39:29.611059 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:39:29.611073 | orchestrator | changed: [testbed-manager] 2026-03-09 00:39:29.611085 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:39:29.611097 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:39:29.611110 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:39:29.611122 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:39:29.611149 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:39:29.611163 | 
orchestrator | 2026-03-09 00:39:29.611177 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-09 00:39:29.611189 | orchestrator | Monday 09 March 2026 00:39:11 +0000 (0:00:01.268) 0:00:05.200 ********** 2026-03-09 00:39:29.611201 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:39:29.611214 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:39:29.611227 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:39:29.611239 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:39:29.611251 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:39:29.611263 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:39:29.611276 | orchestrator | ok: [testbed-manager] 2026-03-09 00:39:29.611289 | orchestrator | 2026-03-09 00:39:29.611300 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-09 00:39:29.611311 | orchestrator | Monday 09 March 2026 00:39:12 +0000 (0:00:01.216) 0:00:06.417 ********** 2026-03-09 00:39:29.611322 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:39:29.611332 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:39:29.611343 | orchestrator | changed: [testbed-manager] 2026-03-09 00:39:29.611354 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:39:29.611364 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:39:29.611375 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:39:29.611385 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:39:29.611396 | orchestrator | 2026-03-09 00:39:29.611407 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-09 00:39:29.611426 | orchestrator | Monday 09 March 2026 00:39:13 +0000 (0:00:00.843) 0:00:07.260 ********** 2026-03-09 00:39:29.611437 | orchestrator | changed: [testbed-manager] 2026-03-09 00:39:29.611448 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:39:29.611458 | orchestrator | changed: [testbed-node-0] 
2026-03-09 00:39:29.611469 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:39:29.611479 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:39:29.611490 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:39:29.611500 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:39:29.611511 | orchestrator | 2026-03-09 00:39:29.611522 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-09 00:39:29.611567 | orchestrator | Monday 09 March 2026 00:39:26 +0000 (0:00:12.798) 0:00:20.059 ********** 2026-03-09 00:39:29.611581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:39:29.611592 | orchestrator | 2026-03-09 00:39:29.611603 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-09 00:39:29.611613 | orchestrator | Monday 09 March 2026 00:39:27 +0000 (0:00:01.107) 0:00:21.166 ********** 2026-03-09 00:39:29.611624 | orchestrator | changed: [testbed-manager] 2026-03-09 00:39:29.611635 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:39:29.611645 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:39:29.611656 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:39:29.611670 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:39:29.611689 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:39:29.611707 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:39:29.611724 | orchestrator | 2026-03-09 00:39:29.611742 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:39:29.611761 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:39:29.611803 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:39:29.611824 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:39:29.611842 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:39:29.611861 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:39:29.611880 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:39:29.611899 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:39:29.611915 | orchestrator | 2026-03-09 00:39:29.611926 | orchestrator | 2026-03-09 00:39:29.611937 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:39:29.611948 | orchestrator | Monday 09 March 2026 00:39:29 +0000 (0:00:01.875) 0:00:23.041 ********** 2026-03-09 00:39:29.611963 | orchestrator | =============================================================================== 2026-03-09 00:39:29.611981 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.80s 2026-03-09 00:39:29.611999 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2026-03-09 00:39:29.612016 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.69s 2026-03-09 00:39:29.612046 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.27s 2026-03-09 00:39:29.612066 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 2026-03-09 00:39:29.612083 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2026-03-09 00:39:29.612103 | orchestrator | osism.services.hddtemp : Include 
distribution specific service tasks ---- 1.11s 2026-03-09 00:39:29.612122 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s 2026-03-09 00:39:29.612140 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2026-03-09 00:39:29.961358 | orchestrator | ++ semver latest 7.1.1 2026-03-09 00:39:30.004998 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:39:30.005097 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-09 00:39:30.005113 | orchestrator | + sudo systemctl restart manager.service 2026-03-09 00:40:11.464603 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-09 00:40:11.464693 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-09 00:40:11.464709 | orchestrator | + local max_attempts=60 2026-03-09 00:40:11.464722 | orchestrator | + local name=ceph-ansible 2026-03-09 00:40:11.464733 | orchestrator | + local attempt_num=1 2026-03-09 00:40:11.464745 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:11.495767 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:11.495839 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:11.495852 | orchestrator | + sleep 5 2026-03-09 00:40:16.500918 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:16.530307 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:16.530402 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:16.530425 | orchestrator | + sleep 5 2026-03-09 00:40:21.533815 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:21.572403 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:21.572483 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:21.572497 | orchestrator | + sleep 5 2026-03-09 00:40:26.577292 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:26.608195 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:26.608280 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:26.608291 | orchestrator | + sleep 5 2026-03-09 00:40:31.612661 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:31.654582 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:31.654692 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:31.654709 | orchestrator | + sleep 5 2026-03-09 00:40:36.660214 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:36.698950 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:36.699040 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:36.699054 | orchestrator | + sleep 5 2026-03-09 00:40:41.703976 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:41.744281 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:41.744370 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:41.744384 | orchestrator | + sleep 5 2026-03-09 00:40:46.750589 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:46.782375 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:46.782471 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:46.782488 | orchestrator | + sleep 5 2026-03-09 00:40:51.786560 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:51.841252 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:51.841733 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:51.841760 | orchestrator | + sleep 5 2026-03-09 00:40:56.845102 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:56.879976 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:56.880072 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:40:56.880085 | orchestrator | + sleep 5 2026-03-09 00:41:01.884896 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:01.935513 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:01.935690 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:01.935737 | orchestrator | + sleep 5 2026-03-09 00:41:06.941954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:06.982222 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:06.982333 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:06.982354 | orchestrator | + sleep 5 2026-03-09 00:41:11.987098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:12.021675 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:12.021773 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:12.021788 | orchestrator | + sleep 5 2026-03-09 00:41:17.026887 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:17.062701 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:17.062791 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-09 00:41:17.062802 | orchestrator | + local max_attempts=60 2026-03-09 00:41:17.062811 | orchestrator | + local name=kolla-ansible 2026-03-09 00:41:17.062883 | orchestrator | + local attempt_num=1 2026-03-09 00:41:17.062905 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-09 00:41:17.101251 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:17.101331 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-09 00:41:17.101341 | orchestrator | + local max_attempts=60 2026-03-09 00:41:17.101349 | orchestrator | + local name=osism-ansible 2026-03-09 00:41:17.101357 | orchestrator | + local attempt_num=1 2026-03-09 00:41:17.102380 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-09 00:41:17.137050 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:17.137133 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-09 00:41:17.137143 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-09 00:41:17.296277 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-09 00:41:17.448791 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-09 00:41:17.783290 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-09 00:41:17.783515 | orchestrator | + osism apply gather-facts 2026-03-09 00:41:29.920984 | orchestrator | 2026-03-09 00:41:29 | INFO  | Prepare task for execution of gather-facts. 2026-03-09 00:41:30.003763 | orchestrator | 2026-03-09 00:41:30 | INFO  | Task df189d79-1eff-42f4-b384-e0e5781a0cfe (gather-facts) was prepared for execution. 2026-03-09 00:41:30.003878 | orchestrator | 2026-03-09 00:41:30 | INFO  | It takes a moment until task df189d79-1eff-42f4-b384-e0e5781a0cfe (gather-facts) has been started and output is visible here. 
2026-03-09 00:41:43.770109 | orchestrator | 2026-03-09 00:41:43.770233 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:41:43.770257 | orchestrator | 2026-03-09 00:41:43.770271 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-09 00:41:43.770286 | orchestrator | Monday 09 March 2026 00:41:34 +0000 (0:00:00.231) 0:00:00.231 ********** 2026-03-09 00:41:43.770300 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:41:43.770314 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:41:43.770328 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:41:43.770342 | orchestrator | ok: [testbed-manager] 2026-03-09 00:41:43.770356 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:43.770369 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:43.770403 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:43.770418 | orchestrator | 2026-03-09 00:41:43.770431 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:41:43.770444 | orchestrator | 2026-03-09 00:41:43.770458 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:41:43.770472 | orchestrator | Monday 09 March 2026 00:41:42 +0000 (0:00:08.268) 0:00:08.499 ********** 2026-03-09 00:41:43.770487 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:41:43.770500 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:41:43.770513 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:41:43.770636 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:41:43.770650 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:43.770664 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:41:43.770708 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:41:43.770724 | orchestrator | 2026-03-09 00:41:43.770739 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-09 00:41:43.770753 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770767 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770780 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770794 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770806 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770820 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770833 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:43.770846 | orchestrator | 2026-03-09 00:41:43.770859 | orchestrator | 2026-03-09 00:41:43.770872 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:41:43.770886 | orchestrator | Monday 09 March 2026 00:41:43 +0000 (0:00:00.569) 0:00:09.069 ********** 2026-03-09 00:41:43.770900 | orchestrator | =============================================================================== 2026-03-09 00:41:43.770913 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.27s 2026-03-09 00:41:43.770927 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-03-09 00:41:44.134417 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-09 00:41:44.153676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-09 
00:41:44.176915 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-09 00:41:44.197687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-09 00:41:44.212957 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-09 00:41:44.232705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-09 00:41:44.248716 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-09 00:41:44.266313 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-09 00:41:44.283820 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-09 00:41:44.302069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-09 00:41:44.322801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-09 00:41:44.342053 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-09 00:41:44.363790 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-09 00:41:44.377340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-09 00:41:44.394606 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-09 00:41:44.406873 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-09 00:41:44.424346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-09 00:41:44.435449 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-09 00:41:44.447424 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-09 00:41:44.464106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-09 00:41:44.475887 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-09 00:41:44.498579 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-09 00:41:44.516128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-09 00:41:44.540554 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-09 00:41:44.862575 | orchestrator | ok: Runtime: 0:25:23.945377
2026-03-09 00:41:44.968698 |
2026-03-09 00:41:44.968851 | TASK [Deploy services]
2026-03-09 00:41:45.503727 | orchestrator | skipping: Conditional result was False
2026-03-09 00:41:45.523977 |
2026-03-09 00:41:45.524169 | TASK [Deploy in a nutshell]
2026-03-09 00:41:46.234422 | orchestrator | + set -e
2026-03-09 00:41:46.234660 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-09 00:41:46.234686 | orchestrator | ++ export INTERACTIVE=false
2026-03-09 00:41:46.234708 | orchestrator | ++ INTERACTIVE=false
2026-03-09 00:41:46.234722 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-09 00:41:46.234735 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-09 00:41:46.234749 | orchestrator | + source /opt/manager-vars.sh
2026-03-09 00:41:46.234793 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-09 00:41:46.234822 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-09 00:41:46.234836 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-09 00:41:46.234851 | orchestrator | ++ CEPH_VERSION=reef
2026-03-09 00:41:46.234863 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-09 00:41:46.234883 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-09 00:41:46.234894 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-09 00:41:46.234915 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-09 00:41:46.234926 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-03-09 00:41:46.234941 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-03-09 00:41:46.234952 | orchestrator | ++ export ARA=false
2026-03-09 00:41:46.234963 | orchestrator | ++ ARA=false
2026-03-09 00:41:46.234989 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-09 00:41:46.235002 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-09 00:41:46.235013 | orchestrator | ++ export TEMPEST=true
2026-03-09 00:41:46.235024 | orchestrator | ++ TEMPEST=true
2026-03-09 00:41:46.235034 | orchestrator | ++ export IS_ZUUL=true
2026-03-09 00:41:46.235046 | orchestrator | ++ IS_ZUUL=true
2026-03-09 00:41:46.235057 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2026-03-09 00:41:46.235068 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2026-03-09 00:41:46.235079 | orchestrator | ++ export EXTERNAL_API=false
2026-03-09 00:41:46.235090 | orchestrator | ++ EXTERNAL_API=false
2026-03-09 00:41:46.235101 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-09 00:41:46.235112 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-09 00:41:46.235123 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-09 00:41:46.235133 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-09 00:41:46.235145 | orchestrator |
2026-03-09 00:41:46.235156 | orchestrator | # PULL IMAGES
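The trace above links each long, numbered deploy/upgrade/bootstrap script into /usr/local/bin under a short command name. A minimal sketch of that pattern, using temporary directories and an illustrative script body instead of the real /opt/configuration tree:

```shell
# Sketch of the symlink pattern from the trace: expose numbered deploy
# scripts under short names in a bin directory. All paths here are
# temporary stand-ins, not the real testbed layout.
set -e
bindir=$(mktemp -d)
scriptdir=$(mktemp -d)
printf '#!/bin/sh\necho deploy-openstack\n' > "$scriptdir/300-openstack.sh"
chmod +x "$scriptdir/300-openstack.sh"
# -s creates a symlink, -f replaces an existing one so reruns stay idempotent
ln -sf "$scriptdir/300-openstack.sh" "$bindir/deploy-openstack"
out=$("$bindir/deploy-openstack")
echo "$out"
```

The `-f` flag is what lets the job rerun this block safely: an existing link is silently replaced rather than causing `ln` to fail.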
2026-03-09 00:41:46.235167 | orchestrator |
2026-03-09 00:41:46.235178 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-09 00:41:46.235195 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-09 00:41:46.235207 | orchestrator | + echo
2026-03-09 00:41:46.235218 | orchestrator | + echo '# PULL IMAGES'
2026-03-09 00:41:46.235229 | orchestrator | + echo
2026-03-09 00:41:46.235751 | orchestrator | ++ semver latest 7.0.0
2026-03-09 00:41:46.302214 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-09 00:41:46.302302 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-09 00:41:46.302314 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-09 00:41:48.363142 | orchestrator | 2026-03-09 00:41:48 | INFO  | Trying to run play pull-images in environment custom
2026-03-09 00:41:58.478140 | orchestrator | 2026-03-09 00:41:58 | INFO  | Prepare task for execution of pull-images.
2026-03-09 00:41:58.558380 | orchestrator | 2026-03-09 00:41:58 | INFO  | Task df48f51a-ef09-4c32-a260-6126dee407e2 (pull-images) was prepared for execution.
2026-03-09 00:41:58.558489 | orchestrator | 2026-03-09 00:41:58 | INFO  | Task df48f51a-ef09-4c32-a260-6126dee407e2 is running in background. No more output. Check ARA for logs.
2026-03-09 00:42:01.169828 | orchestrator | 2026-03-09 00:42:01 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-09 00:42:11.237237 | orchestrator | 2026-03-09 00:42:11 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-09 00:42:11.317084 | orchestrator | 2026-03-09 00:42:11 | INFO  | Task 514dbeb9-2fbf-4c8b-bb28-1b251460a042 (wipe-partitions) was prepared for execution.
2026-03-09 00:42:11.317171 | orchestrator | 2026-03-09 00:42:11 | INFO  | It takes a moment until task 514dbeb9-2fbf-4c8b-bb28-1b251460a042 (wipe-partitions) has been started and output is visible here.
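The version gate visible in the trace is worth unpacking: `semver latest 7.0.0` returns -1 (the tag `latest` does not parse as a version), so the `-ge 0` test fails and the script falls back to an explicit string match on `latest`. A hedged sketch of that two-branch gate; the real job uses a `semver` helper, and `sort -V` is only a stand-in assumption here:

```shell
# Version gate sketch: take the new code path when MANAGER_VERSION is at
# least a threshold, OR when it is the unparseable "latest" tag.
version_ge() {
    # true when $1 >= $2 for dotted numeric versions (GNU sort -V)
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
MANAGER_VERSION=latest
if [ "$MANAGER_VERSION" = "latest" ] || version_ge "$MANAGER_VERSION" "7.0.0"; then
    result="pull-images supported"
fi
echo "$result"
```

Checking the `latest` branch first matters: handing a non-numeric tag to a numeric comparison is exactly what produced the -1 in the trace.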
2026-03-09 00:42:23.593751 | orchestrator |
2026-03-09 00:42:23.593844 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-09 00:42:23.593858 | orchestrator |
2026-03-09 00:42:23.593929 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-09 00:42:23.593949 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.132) 0:00:00.132 **********
2026-03-09 00:42:23.593986 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:42:23.593997 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:42:23.594005 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:42:23.594013 | orchestrator |
2026-03-09 00:42:23.594074 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-09 00:42:23.594082 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.532) 0:00:00.664 **********
2026-03-09 00:42:23.594094 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:42:23.594129 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:23.594139 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:23.594147 | orchestrator |
2026-03-09 00:42:23.594156 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-09 00:42:23.594164 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.359) 0:00:01.024 **********
2026-03-09 00:42:23.594172 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:42:23.594183 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:42:23.594196 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:23.594210 | orchestrator |
2026-03-09 00:42:23.594218 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-09 00:42:23.594226 | orchestrator | Monday 09 March 2026 00:42:17 +0000 (0:00:00.531) 0:00:01.556 **********
2026-03-09 00:42:23.594235 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:42:23.594243 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:23.594251 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:23.594258 | orchestrator |
2026-03-09 00:42:23.594267 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-09 00:42:23.594275 | orchestrator | Monday 09 March 2026 00:42:17 +0000 (0:00:00.263) 0:00:01.820 **********
2026-03-09 00:42:23.594283 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-09 00:42:23.594294 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-09 00:42:23.594303 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-09 00:42:23.594311 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-09 00:42:23.594319 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-09 00:42:23.594327 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-09 00:42:23.594335 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-09 00:42:23.594343 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-09 00:42:23.594351 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-09 00:42:23.594360 | orchestrator |
2026-03-09 00:42:23.594368 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-09 00:42:23.594376 | orchestrator | Monday 09 March 2026 00:42:18 +0000 (0:00:01.124) 0:00:02.944 **********
2026-03-09 00:42:23.594385 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-09 00:42:23.594393 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-09 00:42:23.594401 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-09 00:42:23.594409 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-09 00:42:23.594417 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-09 00:42:23.594425 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-09 00:42:23.594433 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-09 00:42:23.594441 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-09 00:42:23.594449 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-09 00:42:23.594457 | orchestrator |
2026-03-09 00:42:23.594471 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-09 00:42:23.594480 | orchestrator | Monday 09 March 2026 00:42:20 +0000 (0:00:01.420) 0:00:04.364 **********
2026-03-09 00:42:23.594488 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-09 00:42:23.594496 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-09 00:42:23.594504 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-09 00:42:23.594567 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-09 00:42:23.594586 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-09 00:42:23.594595 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-09 00:42:23.594603 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-09 00:42:23.594610 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-09 00:42:23.594618 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-09 00:42:23.594626 | orchestrator |
2026-03-09 00:42:23.594634 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-09 00:42:23.594642 | orchestrator | Monday 09 March 2026 00:42:22 +0000 (0:00:00.559) 0:00:06.321 **********
2026-03-09 00:42:23.594650 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:42:23.594658 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:42:23.594666 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:42:23.594674 | orchestrator |
2026-03-09 00:42:23.594682 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-09 00:42:23.594690 | orchestrator | Monday 09 March 2026 00:42:22 +0000 (0:00:00.559) 0:00:06.880 **********
2026-03-09 00:42:23.594698 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:42:23.594706 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:42:23.594714 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:42:23.594723 | orchestrator |
2026-03-09 00:42:23.594731 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:42:23.594740 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:23.594750 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:23.594775 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:23.594784 | orchestrator |
2026-03-09 00:42:23.594792 | orchestrator |
2026-03-09 00:42:23.594800 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:42:23.594808 | orchestrator | Monday 09 March 2026 00:42:23 +0000 (0:00:00.596) 0:00:07.477 **********
2026-03-09 00:42:23.594816 | orchestrator | ===============================================================================
2026-03-09 00:42:23.594824 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 1.96s
2026-03-09 00:42:23.594832 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.42s
2026-03-09 00:42:23.594840 | orchestrator | Check device availability ----------------------------------------------- 1.12s
2026-03-09 00:42:23.594848 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s
2026-03-09 00:42:23.594855 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s
2026-03-09 00:42:23.594863 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.53s
2026-03-09 00:42:23.594871 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.53s
2026-03-09 00:42:23.594879 | orchestrator | Remove all rook related logical devices --------------------------------- 0.36s
2026-03-09 00:42:23.594887 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-03-09 00:42:36.025489 | orchestrator | 2026-03-09 00:42:36 | INFO  | Prepare task for execution of facts.
2026-03-09 00:42:36.097998 | orchestrator | 2026-03-09 00:42:36 | INFO  | Task 95a8aebf-c7a4-4204-9a52-61bb6477336f (facts) was prepared for execution.
2026-03-09 00:42:36.098176 | orchestrator | 2026-03-09 00:42:36 | INFO  | It takes a moment until task 95a8aebf-c7a4-4204-9a52-61bb6477336f (facts) has been started and output is visible here.
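The wipe-partitions play above combines signature erasure (`wipefs`), zeroing the first 32 MiB of each disk, and udev notification. An illustrative replay of the zeroing step against a plain file instead of /dev/sdb..sdd; on real block devices the play additionally runs `wipefs -a`, `udevadm control --reload-rules`, and `udevadm trigger`, which need root and actual devices:

```shell
# Stand-in for "Overwrite first 32M with zeros": zero the first 32 MiB
# of a target in place. A temp file substitutes for the real disk here.
set -e
img=$(mktemp)
truncate -s 64M "$img"            # 64 MiB stand-in "disk"
# conv=notrunc keeps the rest of the target intact, as on a real device
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none
size=$(stat -c %s "$img")
nonzero=$(head -c 1048576 "$img" | tr -d '\0' | wc -c)
echo "size=$size nonzero=$nonzero"
```

`conv=notrunc` is the important flag: without it, `dd` would truncate the target file, whereas on a disk the intent is only to destroy leading metadata (partition tables, LVM/Ceph labels) while leaving the device size untouched.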
2026-03-09 00:42:49.411458 | orchestrator |
2026-03-09 00:42:49.411628 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-09 00:42:49.411647 | orchestrator |
2026-03-09 00:42:49.411685 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-09 00:42:49.411697 | orchestrator | Monday 09 March 2026 00:42:40 +0000 (0:00:00.297) 0:00:00.297 **********
2026-03-09 00:42:49.411708 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:42:49.411721 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:42:49.411731 | orchestrator | ok: [testbed-manager]
2026-03-09 00:42:49.411742 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:42:49.411764 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:42:49.411776 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:42:49.411786 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:49.411797 | orchestrator |
2026-03-09 00:42:49.411808 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-09 00:42:49.411819 | orchestrator | Monday 09 March 2026 00:42:41 +0000 (0:00:01.240) 0:00:01.537 **********
2026-03-09 00:42:49.411830 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:42:49.411842 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:42:49.411853 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:42:49.411864 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:42:49.411874 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:42:49.411885 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:49.411896 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:49.411907 | orchestrator |
2026-03-09 00:42:49.411918 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:42:49.411947 | orchestrator |
2026-03-09 00:42:49.411959 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:42:49.411971 | orchestrator | Monday 09 March 2026 00:42:42 +0000 (0:00:01.309) 0:00:02.847 **********
2026-03-09 00:42:49.411982 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:42:49.411993 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:42:49.412003 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:42:49.412014 | orchestrator | ok: [testbed-manager]
2026-03-09 00:42:49.412025 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:42:49.412036 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:42:49.412046 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:49.412057 | orchestrator |
2026-03-09 00:42:49.412068 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-09 00:42:49.412079 | orchestrator |
2026-03-09 00:42:49.412090 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-09 00:42:49.412101 | orchestrator | Monday 09 March 2026 00:42:48 +0000 (0:00:05.676) 0:00:08.523 **********
2026-03-09 00:42:49.412112 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:42:49.412123 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:42:49.412134 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:42:49.412144 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:42:49.412155 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:42:49.412166 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:49.412176 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:49.412187 | orchestrator |
2026-03-09 00:42:49.412198 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:42:49.412210 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412222 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412233 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412244 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412254 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412276 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412287 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:42:49.412298 | orchestrator |
2026-03-09 00:42:49.412309 | orchestrator |
2026-03-09 00:42:49.412320 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:42:49.412330 | orchestrator | Monday 09 March 2026 00:42:49 +0000 (0:00:00.528) 0:00:09.051 **********
2026-03-09 00:42:49.412341 | orchestrator | ===============================================================================
2026-03-09 00:42:49.412352 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.68s
2026-03-09 00:42:49.412363 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s
2026-03-09 00:42:49.412374 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s
2026-03-09 00:42:49.412385 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2026-03-09 00:42:51.894682 | orchestrator | 2026-03-09 00:42:51 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-09 00:42:51.958792 | orchestrator | 2026-03-09 00:42:51 | INFO  | Task 43606999-9a0c-428d-9ff4-64d08d7b20a9 (ceph-configure-lvm-volumes) was prepared for execution.
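The "Create custom facts directory" task above relies on Ansible's local-facts mechanism: JSON files named `*.fact` under /etc/ansible/facts.d on a host are read by the setup module and exposed to plays as `ansible_local.<name>`. A small sketch of that file format, using a temp directory so it runs unprivileged, with values taken from this job's manager-vars.sh:

```shell
# Local-facts sketch: write a *.fact JSON file as the osism.commons.facts
# role would, then read one key back the way a play would consume it.
set -e
factsdir=$(mktemp -d)            # stand-in for /etc/ansible/facts.d
cat > "$factsdir/testbed.fact" <<'EOF'
{"deploy_mode": "manager", "ceph_version": "reef", "openstack_version": "2025.1"}
EOF
# A play would see this as ansible_local.testbed.ceph_version
ceph=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['ceph_version'])" "$factsdir/testbed.fact")
echo "$ceph"
```

This is why the play immediately re-gathers facts afterwards: freshly copied `.fact` files only become visible once the setup module runs again.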
2026-03-09 00:42:51.958905 | orchestrator | 2026-03-09 00:42:51 | INFO  | It takes a moment until task 43606999-9a0c-428d-9ff4-64d08d7b20a9 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-09 00:43:04.140758 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 00:43:04.140866 | orchestrator | 2.16.14
2026-03-09 00:43:04.140883 | orchestrator |
2026-03-09 00:43:04.140896 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-09 00:43:04.140908 | orchestrator |
2026-03-09 00:43:04.140919 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:43:04.140931 | orchestrator | Monday 09 March 2026 00:42:56 +0000 (0:00:00.334) 0:00:00.334 **********
2026-03-09 00:43:04.140942 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-09 00:43:04.140954 | orchestrator |
2026-03-09 00:43:04.140965 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:43:04.140976 | orchestrator | Monday 09 March 2026 00:42:56 +0000 (0:00:00.258) 0:00:00.592 **********
2026-03-09 00:43:04.140988 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:43:04.140999 | orchestrator |
2026-03-09 00:43:04.141010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141021 | orchestrator | Monday 09 March 2026 00:42:57 +0000 (0:00:00.206) 0:00:00.798 **********
2026-03-09 00:43:04.141043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-09 00:43:04.141055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-09 00:43:04.141066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-09 00:43:04.141078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-09 00:43:04.141089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-09 00:43:04.141099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-09 00:43:04.141110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-09 00:43:04.141121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-09 00:43:04.141132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-09 00:43:04.141143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-09 00:43:04.141176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-09 00:43:04.141188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-09 00:43:04.141199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-09 00:43:04.141210 | orchestrator |
2026-03-09 00:43:04.141221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141232 | orchestrator | Monday 09 March 2026 00:42:57 +0000 (0:00:00.500) 0:00:01.299 **********
2026-03-09 00:43:04.141243 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141253 | orchestrator |
2026-03-09 00:43:04.141264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141275 | orchestrator | Monday 09 March 2026 00:42:57 +0000 (0:00:00.192) 0:00:01.491 **********
2026-03-09 00:43:04.141288 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141301 | orchestrator |
2026-03-09 00:43:04.141314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141332 | orchestrator | Monday 09 March 2026 00:42:57 +0000 (0:00:00.209) 0:00:01.701 **********
2026-03-09 00:43:04.141346 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141358 | orchestrator |
2026-03-09 00:43:04.141371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141385 | orchestrator | Monday 09 March 2026 00:42:58 +0000 (0:00:00.189) 0:00:01.891 **********
2026-03-09 00:43:04.141398 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141411 | orchestrator |
2026-03-09 00:43:04.141424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141436 | orchestrator | Monday 09 March 2026 00:42:58 +0000 (0:00:00.204) 0:00:02.095 **********
2026-03-09 00:43:04.141450 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141463 | orchestrator |
2026-03-09 00:43:04.141477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141490 | orchestrator | Monday 09 March 2026 00:42:58 +0000 (0:00:00.205) 0:00:02.301 **********
2026-03-09 00:43:04.141503 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141563 | orchestrator |
2026-03-09 00:43:04.141576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141588 | orchestrator | Monday 09 March 2026 00:42:58 +0000 (0:00:00.217) 0:00:02.519 **********
2026-03-09 00:43:04.141601 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141613 | orchestrator |
2026-03-09 00:43:04.141627 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141640 | orchestrator | Monday 09 March 2026 00:42:58 +0000 (0:00:00.204) 0:00:02.724 **********
2026-03-09 00:43:04.141651 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.141662 | orchestrator |
2026-03-09 00:43:04.141673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141683 | orchestrator | Monday 09 March 2026 00:42:59 +0000 (0:00:00.197) 0:00:02.921 **********
2026-03-09 00:43:04.141694 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d)
2026-03-09 00:43:04.141706 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d)
2026-03-09 00:43:04.141717 | orchestrator |
2026-03-09 00:43:04.141728 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141757 | orchestrator | Monday 09 March 2026 00:42:59 +0000 (0:00:00.437) 0:00:03.359 **********
2026-03-09 00:43:04.141769 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284)
2026-03-09 00:43:04.141780 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284)
2026-03-09 00:43:04.141791 | orchestrator |
2026-03-09 00:43:04.141808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141828 | orchestrator | Monday 09 March 2026 00:43:00 +0000 (0:00:00.699) 0:00:04.058 **********
2026-03-09 00:43:04.141840 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393)
2026-03-09 00:43:04.141851 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393)
2026-03-09 00:43:04.141861 | orchestrator |
2026-03-09 00:43:04.141872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141883 | orchestrator | Monday 09 March 2026 00:43:00 +0000 (0:00:00.662) 0:00:04.721 **********
2026-03-09 00:43:04.141894 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f)
2026-03-09 00:43:04.141905 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f)
2026-03-09 00:43:04.141916 | orchestrator |
2026-03-09 00:43:04.141927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:04.141938 | orchestrator | Monday 09 March 2026 00:43:01 +0000 (0:00:00.928) 0:00:05.649 **********
2026-03-09 00:43:04.141948 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:43:04.141959 | orchestrator |
2026-03-09 00:43:04.141970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.141981 | orchestrator | Monday 09 March 2026 00:43:02 +0000 (0:00:00.351) 0:00:06.000 **********
2026-03-09 00:43:04.141992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-09 00:43:04.142003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-09 00:43:04.142050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-09 00:43:04.142064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-09 00:43:04.142074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-09 00:43:04.142085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-09 00:43:04.142096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-09 00:43:04.142106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-09 00:43:04.142117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-09 00:43:04.142128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-09 00:43:04.142139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-09 00:43:04.142149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-09 00:43:04.142160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-09 00:43:04.142171 | orchestrator |
2026-03-09 00:43:04.142181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142192 | orchestrator | Monday 09 March 2026 00:43:02 +0000 (0:00:00.380) 0:00:06.381 **********
2026-03-09 00:43:04.142203 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142213 | orchestrator |
2026-03-09 00:43:04.142224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142235 | orchestrator | Monday 09 March 2026 00:43:02 +0000 (0:00:00.228) 0:00:06.609 **********
2026-03-09 00:43:04.142245 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142256 | orchestrator |
2026-03-09 00:43:04.142267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142277 | orchestrator | Monday 09 March 2026 00:43:03 +0000 (0:00:00.194) 0:00:06.804 **********
2026-03-09 00:43:04.142288 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142306 | orchestrator |
2026-03-09 00:43:04.142317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142327 | orchestrator | Monday 09 March 2026 00:43:03 +0000 (0:00:00.228) 0:00:07.032 **********
2026-03-09 00:43:04.142338 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142349 | orchestrator |
2026-03-09 00:43:04.142360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142370 | orchestrator | Monday 09 March 2026 00:43:03 +0000 (0:00:00.202) 0:00:07.235 **********
2026-03-09 00:43:04.142381 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142391 | orchestrator |
2026-03-09 00:43:04.142402 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142413 | orchestrator | Monday 09 March 2026 00:43:03 +0000 (0:00:00.212) 0:00:07.447 **********
2026-03-09 00:43:04.142423 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142434 | orchestrator |
2026-03-09 00:43:04.142445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:04.142456 | orchestrator | Monday 09 March 2026 00:43:03 +0000 (0:00:00.202) 0:00:07.650 **********
2026-03-09 00:43:04.142467 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:04.142477 | orchestrator |
2026-03-09 00:43:04.142494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:11.312784 | orchestrator | Monday 09 March 2026 00:43:04 +0000 (0:00:00.212) 0:00:07.862 **********
2026-03-09 00:43:11.312897 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:11.312925 | orchestrator |
2026-03-09 00:43:11.312943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:11.312954 | orchestrator | Monday 09 March 2026 00:43:04 +0000 (0:00:00.204) 0:00:08.066 **********
2026-03-09 00:43:11.312966 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-09 00:43:11.312977 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-09 00:43:11.312988 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-09 00:43:11.312999 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-09 00:43:11.313010 | orchestrator |
2026-03-09 00:43:11.313021 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:11.313048 | orchestrator | Monday 09 March 2026 00:43:05 +0000 (0:00:01.094) 0:00:09.161 **********
2026-03-09 00:43:11.313060 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:11.313071 | orchestrator |
2026-03-09 00:43:11.313082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:11.313093 | orchestrator | Monday 09 March 2026 00:43:05 +0000 (0:00:00.212) 0:00:09.373 **********
2026-03-09 00:43:11.313110 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:11.313127 | orchestrator |
2026-03-09 00:43:11.313139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:11.313149 | orchestrator | Monday 09 March 2026 00:43:05 +0000 (0:00:00.201) 0:00:09.575 **********
2026-03-09 00:43:11.313160 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:11.313171 | orchestrator |
2026-03-09 00:43:11.313182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:11.313194 | orchestrator | Monday 09 March 2026 00:43:06 +0000 (0:00:00.208) 0:00:09.784 **********
2026-03-09 00:43:11.313213 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:11.313231 | orchestrator |
2026-03-09 00:43:11.313248 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-09 00:43:11.313264 | orchestrator | Monday 09 March 2026 00:43:06 +0000 (0:00:00.212) 0:00:09.996 **********
2026-03-09 00:43:11.313281 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-09 00:43:11.313300 |
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:43:11.313319 | orchestrator | 2026-03-09 00:43:11.313338 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-09 00:43:11.313358 | orchestrator | Monday 09 March 2026 00:43:06 +0000 (0:00:00.171) 0:00:10.168 ********** 2026-03-09 00:43:11.313429 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.313452 | orchestrator | 2026-03-09 00:43:11.313490 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:43:11.313537 | orchestrator | Monday 09 March 2026 00:43:06 +0000 (0:00:00.141) 0:00:10.310 ********** 2026-03-09 00:43:11.313551 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.313562 | orchestrator | 2026-03-09 00:43:11.313573 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:43:11.313583 | orchestrator | Monday 09 March 2026 00:43:06 +0000 (0:00:00.140) 0:00:10.450 ********** 2026-03-09 00:43:11.313594 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.313604 | orchestrator | 2026-03-09 00:43:11.313615 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-09 00:43:11.313638 | orchestrator | Monday 09 March 2026 00:43:06 +0000 (0:00:00.133) 0:00:10.583 ********** 2026-03-09 00:43:11.313650 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:11.313669 | orchestrator | 2026-03-09 00:43:11.313689 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:43:11.313709 | orchestrator | Monday 09 March 2026 00:43:07 +0000 (0:00:00.157) 0:00:10.741 ********** 2026-03-09 00:43:11.313725 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b4a24c5-7164-5e55-92cc-433a48be10d0'}}) 2026-03-09 00:43:11.313745 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07cae8b8-d309-58e5-9f3f-3806cd3fe573'}}) 2026-03-09 00:43:11.313756 | orchestrator | 2026-03-09 00:43:11.313767 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:43:11.313778 | orchestrator | Monday 09 March 2026 00:43:07 +0000 (0:00:00.187) 0:00:10.929 ********** 2026-03-09 00:43:11.313789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b4a24c5-7164-5e55-92cc-433a48be10d0'}})  2026-03-09 00:43:11.313813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07cae8b8-d309-58e5-9f3f-3806cd3fe573'}})  2026-03-09 00:43:11.313832 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.313843 | orchestrator | 2026-03-09 00:43:11.313854 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:43:11.313865 | orchestrator | Monday 09 March 2026 00:43:07 +0000 (0:00:00.154) 0:00:11.083 ********** 2026-03-09 00:43:11.313876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b4a24c5-7164-5e55-92cc-433a48be10d0'}})  2026-03-09 00:43:11.313887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07cae8b8-d309-58e5-9f3f-3806cd3fe573'}})  2026-03-09 00:43:11.313898 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.313909 | orchestrator | 2026-03-09 00:43:11.313920 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:43:11.313931 | orchestrator | Monday 09 March 2026 00:43:07 +0000 (0:00:00.358) 0:00:11.442 ********** 2026-03-09 00:43:11.313942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b4a24c5-7164-5e55-92cc-433a48be10d0'}})  2026-03-09 00:43:11.313973 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07cae8b8-d309-58e5-9f3f-3806cd3fe573'}})  2026-03-09 00:43:11.313984 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.313995 | orchestrator | 2026-03-09 00:43:11.314006 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:43:11.314150 | orchestrator | Monday 09 March 2026 00:43:07 +0000 (0:00:00.140) 0:00:11.582 ********** 2026-03-09 00:43:11.314168 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:11.314179 | orchestrator | 2026-03-09 00:43:11.314189 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:43:11.314201 | orchestrator | Monday 09 March 2026 00:43:07 +0000 (0:00:00.135) 0:00:11.717 ********** 2026-03-09 00:43:11.314212 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:11.314235 | orchestrator | 2026-03-09 00:43:11.314246 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:43:11.314257 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.129) 0:00:11.846 ********** 2026-03-09 00:43:11.314267 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.314287 | orchestrator | 2026-03-09 00:43:11.314299 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:43:11.314310 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.134) 0:00:11.981 ********** 2026-03-09 00:43:11.314321 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.314331 | orchestrator | 2026-03-09 00:43:11.314342 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:43:11.314353 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.134) 0:00:12.116 ********** 2026-03-09 00:43:11.314364 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.314374 | orchestrator | 2026-03-09 
00:43:11.314385 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:43:11.314399 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.131) 0:00:12.247 ********** 2026-03-09 00:43:11.314415 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:43:11.314426 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:43:11.314437 | orchestrator |  "sdb": { 2026-03-09 00:43:11.314448 | orchestrator |  "osd_lvm_uuid": "0b4a24c5-7164-5e55-92cc-433a48be10d0" 2026-03-09 00:43:11.314460 | orchestrator |  }, 2026-03-09 00:43:11.314471 | orchestrator |  "sdc": { 2026-03-09 00:43:11.314481 | orchestrator |  "osd_lvm_uuid": "07cae8b8-d309-58e5-9f3f-3806cd3fe573" 2026-03-09 00:43:11.314492 | orchestrator |  } 2026-03-09 00:43:11.314503 | orchestrator |  } 2026-03-09 00:43:11.314667 | orchestrator | } 2026-03-09 00:43:11.314686 | orchestrator | 2026-03-09 00:43:11.314697 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-09 00:43:11.314708 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.124) 0:00:12.372 ********** 2026-03-09 00:43:11.314719 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.314730 | orchestrator | 2026-03-09 00:43:11.314740 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-09 00:43:11.314768 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.129) 0:00:12.501 ********** 2026-03-09 00:43:11.314780 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.314790 | orchestrator | 2026-03-09 00:43:11.314802 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-09 00:43:11.314813 | orchestrator | Monday 09 March 2026 00:43:08 +0000 (0:00:00.130) 0:00:12.632 ********** 2026-03-09 00:43:11.314824 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:11.314835 | orchestrator | 2026-03-09 
00:43:11.314846 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-09 00:43:11.314857 | orchestrator | Monday 09 March 2026 00:43:09 +0000 (0:00:00.124) 0:00:12.756 ********** 2026-03-09 00:43:11.314867 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 00:43:11.314878 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-09 00:43:11.314889 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:43:11.314900 | orchestrator |  "sdb": { 2026-03-09 00:43:11.314911 | orchestrator |  "osd_lvm_uuid": "0b4a24c5-7164-5e55-92cc-433a48be10d0" 2026-03-09 00:43:11.314922 | orchestrator |  }, 2026-03-09 00:43:11.314933 | orchestrator |  "sdc": { 2026-03-09 00:43:11.314944 | orchestrator |  "osd_lvm_uuid": "07cae8b8-d309-58e5-9f3f-3806cd3fe573" 2026-03-09 00:43:11.314955 | orchestrator |  } 2026-03-09 00:43:11.314966 | orchestrator |  }, 2026-03-09 00:43:11.314976 | orchestrator |  "lvm_volumes": [ 2026-03-09 00:43:11.314987 | orchestrator |  { 2026-03-09 00:43:11.315009 | orchestrator |  "data": "osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0", 2026-03-09 00:43:11.315021 | orchestrator |  "data_vg": "ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0" 2026-03-09 00:43:11.315044 | orchestrator |  }, 2026-03-09 00:43:11.315065 | orchestrator |  { 2026-03-09 00:43:11.315076 | orchestrator |  "data": "osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573", 2026-03-09 00:43:11.315087 | orchestrator |  "data_vg": "ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573" 2026-03-09 00:43:11.315098 | orchestrator |  } 2026-03-09 00:43:11.315109 | orchestrator |  ] 2026-03-09 00:43:11.315120 | orchestrator |  } 2026-03-09 00:43:11.315131 | orchestrator | } 2026-03-09 00:43:11.315142 | orchestrator | 2026-03-09 00:43:11.315153 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-09 00:43:11.315163 | orchestrator | Monday 09 March 2026 00:43:09 +0000 (0:00:00.327) 0:00:13.084 ********** 2026-03-09 
00:43:11.315174 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-09 00:43:11.315185 | orchestrator | 2026-03-09 00:43:11.315196 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-09 00:43:11.315207 | orchestrator | 2026-03-09 00:43:11.315218 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:43:11.315233 | orchestrator | Monday 09 March 2026 00:43:10 +0000 (0:00:01.508) 0:00:14.593 ********** 2026-03-09 00:43:11.315250 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-09 00:43:11.315262 | orchestrator | 2026-03-09 00:43:11.315273 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:43:11.315283 | orchestrator | Monday 09 March 2026 00:43:11 +0000 (0:00:00.220) 0:00:14.814 ********** 2026-03-09 00:43:11.315294 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:11.315305 | orchestrator | 2026-03-09 00:43:11.315331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.779664 | orchestrator | Monday 09 March 2026 00:43:11 +0000 (0:00:00.224) 0:00:15.038 ********** 2026-03-09 00:43:19.779751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:43:19.779761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:43:19.779768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:43:19.779775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:43:19.779782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:43:19.779789 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:43:19.779796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:43:19.779807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:43:19.779814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-09 00:43:19.779821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:43:19.779828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:43:19.779835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:43:19.779858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:43:19.779865 | orchestrator | 2026-03-09 00:43:19.779873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.779880 | orchestrator | Monday 09 March 2026 00:43:11 +0000 (0:00:00.341) 0:00:15.380 ********** 2026-03-09 00:43:19.779887 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.779894 | orchestrator | 2026-03-09 00:43:19.779901 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.779908 | orchestrator | Monday 09 March 2026 00:43:11 +0000 (0:00:00.204) 0:00:15.584 ********** 2026-03-09 00:43:19.779932 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.779939 | orchestrator | 2026-03-09 00:43:19.779946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.779953 | orchestrator | Monday 09 March 2026 00:43:12 +0000 (0:00:00.193) 0:00:15.777 ********** 2026-03-09 00:43:19.779959 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:43:19.779969 | orchestrator | 2026-03-09 00:43:19.779980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.779988 | orchestrator | Monday 09 March 2026 00:43:12 +0000 (0:00:00.198) 0:00:15.976 ********** 2026-03-09 00:43:19.779994 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780001 | orchestrator | 2026-03-09 00:43:19.780008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780014 | orchestrator | Monday 09 March 2026 00:43:12 +0000 (0:00:00.207) 0:00:16.184 ********** 2026-03-09 00:43:19.780021 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780028 | orchestrator | 2026-03-09 00:43:19.780034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780041 | orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.754) 0:00:16.938 ********** 2026-03-09 00:43:19.780048 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780054 | orchestrator | 2026-03-09 00:43:19.780061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780068 | orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.206) 0:00:17.145 ********** 2026-03-09 00:43:19.780074 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780081 | orchestrator | 2026-03-09 00:43:19.780087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780094 | orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.186) 0:00:17.332 ********** 2026-03-09 00:43:19.780101 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780107 | orchestrator | 2026-03-09 00:43:19.780114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780121 | 
orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.220) 0:00:17.552 ********** 2026-03-09 00:43:19.780127 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07) 2026-03-09 00:43:19.780135 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07) 2026-03-09 00:43:19.780142 | orchestrator | 2026-03-09 00:43:19.780149 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780155 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.471) 0:00:18.024 ********** 2026-03-09 00:43:19.780162 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9) 2026-03-09 00:43:19.780169 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9) 2026-03-09 00:43:19.780176 | orchestrator | 2026-03-09 00:43:19.780182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780189 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.464) 0:00:18.489 ********** 2026-03-09 00:43:19.780196 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3) 2026-03-09 00:43:19.780202 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3) 2026-03-09 00:43:19.780209 | orchestrator | 2026-03-09 00:43:19.780216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780238 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.449) 0:00:18.938 ********** 2026-03-09 00:43:19.780245 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c) 2026-03-09 00:43:19.780253 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c) 2026-03-09 00:43:19.780262 | orchestrator | 2026-03-09 00:43:19.780275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.780283 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.459) 0:00:19.397 ********** 2026-03-09 00:43:19.780291 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:43:19.780299 | orchestrator | 2026-03-09 00:43:19.780307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780315 | orchestrator | Monday 09 March 2026 00:43:16 +0000 (0:00:00.377) 0:00:19.775 ********** 2026-03-09 00:43:19.780323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:43:19.780330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:43:19.780343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:43:19.780351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:43:19.780359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:43:19.780367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:43:19.780378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:43:19.780391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:43:19.780399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-09 00:43:19.780407 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:43:19.780418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:43:19.780428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:43:19.780436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:43:19.780444 | orchestrator | 2026-03-09 00:43:19.780452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780460 | orchestrator | Monday 09 March 2026 00:43:16 +0000 (0:00:00.474) 0:00:20.250 ********** 2026-03-09 00:43:19.780468 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780475 | orchestrator | 2026-03-09 00:43:19.780482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780490 | orchestrator | Monday 09 March 2026 00:43:17 +0000 (0:00:00.708) 0:00:20.958 ********** 2026-03-09 00:43:19.780500 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780556 | orchestrator | 2026-03-09 00:43:19.780568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780576 | orchestrator | Monday 09 March 2026 00:43:17 +0000 (0:00:00.219) 0:00:21.178 ********** 2026-03-09 00:43:19.780583 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780590 | orchestrator | 2026-03-09 00:43:19.780596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780603 | orchestrator | Monday 09 March 2026 00:43:17 +0000 (0:00:00.194) 0:00:21.373 ********** 2026-03-09 00:43:19.780609 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780616 | orchestrator | 2026-03-09 00:43:19.780622 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780629 | orchestrator | Monday 09 March 2026 00:43:17 +0000 (0:00:00.231) 0:00:21.604 ********** 2026-03-09 00:43:19.780635 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780642 | orchestrator | 2026-03-09 00:43:19.780648 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780655 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.203) 0:00:21.808 ********** 2026-03-09 00:43:19.780662 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780674 | orchestrator | 2026-03-09 00:43:19.780681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780687 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.234) 0:00:22.043 ********** 2026-03-09 00:43:19.780694 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780700 | orchestrator | 2026-03-09 00:43:19.780707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780713 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.228) 0:00:22.271 ********** 2026-03-09 00:43:19.780720 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:19.780727 | orchestrator | 2026-03-09 00:43:19.780733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780740 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.224) 0:00:22.496 ********** 2026-03-09 00:43:19.780746 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-09 00:43:19.780753 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-09 00:43:19.780761 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-09 00:43:19.780767 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-09 00:43:19.780774 | orchestrator | 2026-03-09 
00:43:19.780780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.780787 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.876) 0:00:23.373 ********** 2026-03-09 00:43:19.780794 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500066 | orchestrator | 2026-03-09 00:43:27.500186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.500204 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.224) 0:00:23.597 ********** 2026-03-09 00:43:27.500215 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500227 | orchestrator | 2026-03-09 00:43:27.500238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.500249 | orchestrator | Monday 09 March 2026 00:43:20 +0000 (0:00:00.223) 0:00:23.820 ********** 2026-03-09 00:43:27.500259 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500271 | orchestrator | 2026-03-09 00:43:27.500282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.500293 | orchestrator | Monday 09 March 2026 00:43:20 +0000 (0:00:00.196) 0:00:24.017 ********** 2026-03-09 00:43:27.500305 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500316 | orchestrator | 2026-03-09 00:43:27.500327 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-09 00:43:27.500337 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.734) 0:00:24.751 ********** 2026-03-09 00:43:27.500348 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-09 00:43:27.500359 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:43:27.500412 | orchestrator | 2026-03-09 00:43:27.500424 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-09 00:43:27.500481 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.201) 0:00:24.952 ********** 2026-03-09 00:43:27.500494 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500531 | orchestrator | 2026-03-09 00:43:27.500544 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:43:27.500555 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.144) 0:00:25.096 ********** 2026-03-09 00:43:27.500565 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500578 | orchestrator | 2026-03-09 00:43:27.500588 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:43:27.500605 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.158) 0:00:25.255 ********** 2026-03-09 00:43:27.500617 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500628 | orchestrator | 2026-03-09 00:43:27.500638 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-09 00:43:27.500649 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.155) 0:00:25.410 ********** 2026-03-09 00:43:27.500699 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:27.500712 | orchestrator | 2026-03-09 00:43:27.500722 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:43:27.500733 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.155) 0:00:25.566 ********** 2026-03-09 00:43:27.500744 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c74837a-43e3-5ea9-9fe0-5cec11260b17'}}) 2026-03-09 00:43:27.500757 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '590958f1-5006-5da8-896c-bdb08f0ac33f'}}) 2026-03-09 00:43:27.500768 | orchestrator | 2026-03-09 00:43:27.500779 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:43:27.500789 | orchestrator | Monday 09 March 2026 00:43:22 +0000 (0:00:00.181) 0:00:25.747 ********** 2026-03-09 00:43:27.500801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c74837a-43e3-5ea9-9fe0-5cec11260b17'}})  2026-03-09 00:43:27.500814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '590958f1-5006-5da8-896c-bdb08f0ac33f'}})  2026-03-09 00:43:27.500825 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500836 | orchestrator | 2026-03-09 00:43:27.500847 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:43:27.500858 | orchestrator | Monday 09 March 2026 00:43:22 +0000 (0:00:00.194) 0:00:25.941 ********** 2026-03-09 00:43:27.500869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c74837a-43e3-5ea9-9fe0-5cec11260b17'}})  2026-03-09 00:43:27.500880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '590958f1-5006-5da8-896c-bdb08f0ac33f'}})  2026-03-09 00:43:27.500891 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500902 | orchestrator | 2026-03-09 00:43:27.500925 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:43:27.500936 | orchestrator | Monday 09 March 2026 00:43:22 +0000 (0:00:00.187) 0:00:26.129 ********** 2026-03-09 00:43:27.500946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c74837a-43e3-5ea9-9fe0-5cec11260b17'}})  2026-03-09 00:43:27.500956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '590958f1-5006-5da8-896c-bdb08f0ac33f'}})  2026-03-09 00:43:27.500966 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.500976 | 
orchestrator | 2026-03-09 00:43:27.500986 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:43:27.500997 | orchestrator | Monday 09 March 2026 00:43:22 +0000 (0:00:00.198) 0:00:26.327 ********** 2026-03-09 00:43:27.501007 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:27.501017 | orchestrator | 2026-03-09 00:43:27.501026 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:43:27.501058 | orchestrator | Monday 09 March 2026 00:43:22 +0000 (0:00:00.153) 0:00:26.481 ********** 2026-03-09 00:43:27.501068 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:27.501078 | orchestrator | 2026-03-09 00:43:27.501089 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:43:27.501099 | orchestrator | Monday 09 March 2026 00:43:22 +0000 (0:00:00.135) 0:00:26.616 ********** 2026-03-09 00:43:27.501141 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.501153 | orchestrator | 2026-03-09 00:43:27.501163 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:43:27.501174 | orchestrator | Monday 09 March 2026 00:43:23 +0000 (0:00:00.359) 0:00:26.976 ********** 2026-03-09 00:43:27.501185 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.501195 | orchestrator | 2026-03-09 00:43:27.501205 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:43:27.501215 | orchestrator | Monday 09 March 2026 00:43:23 +0000 (0:00:00.129) 0:00:27.105 ********** 2026-03-09 00:43:27.501225 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.501246 | orchestrator | 2026-03-09 00:43:27.501257 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:43:27.501267 | orchestrator | Monday 09 March 2026 00:43:23 +0000 
(0:00:00.137) 0:00:27.243 ********** 2026-03-09 00:43:27.501278 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:43:27.501288 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:43:27.501298 | orchestrator |  "sdb": { 2026-03-09 00:43:27.501309 | orchestrator |  "osd_lvm_uuid": "9c74837a-43e3-5ea9-9fe0-5cec11260b17" 2026-03-09 00:43:27.501319 | orchestrator |  }, 2026-03-09 00:43:27.501341 | orchestrator |  "sdc": { 2026-03-09 00:43:27.501352 | orchestrator |  "osd_lvm_uuid": "590958f1-5006-5da8-896c-bdb08f0ac33f" 2026-03-09 00:43:27.501363 | orchestrator |  } 2026-03-09 00:43:27.501373 | orchestrator |  } 2026-03-09 00:43:27.501384 | orchestrator | } 2026-03-09 00:43:27.501417 | orchestrator | 2026-03-09 00:43:27.501429 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-09 00:43:27.501440 | orchestrator | Monday 09 March 2026 00:43:23 +0000 (0:00:00.150) 0:00:27.393 ********** 2026-03-09 00:43:27.501451 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.501461 | orchestrator | 2026-03-09 00:43:27.501471 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-09 00:43:27.501483 | orchestrator | Monday 09 March 2026 00:43:23 +0000 (0:00:00.154) 0:00:27.548 ********** 2026-03-09 00:43:27.501493 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.501503 | orchestrator | 2026-03-09 00:43:27.501537 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-09 00:43:27.501547 | orchestrator | Monday 09 March 2026 00:43:23 +0000 (0:00:00.161) 0:00:27.710 ********** 2026-03-09 00:43:27.501557 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:27.501567 | orchestrator | 2026-03-09 00:43:27.501577 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-09 00:43:27.501597 | orchestrator | Monday 09 March 2026 00:43:24 +0000 
(0:00:00.277) 0:00:27.987 ********** 2026-03-09 00:43:27.501608 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 00:43:27.501642 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-09 00:43:27.501664 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:43:27.501676 | orchestrator |  "sdb": { 2026-03-09 00:43:27.501696 | orchestrator |  "osd_lvm_uuid": "9c74837a-43e3-5ea9-9fe0-5cec11260b17" 2026-03-09 00:43:27.501708 | orchestrator |  }, 2026-03-09 00:43:27.501718 | orchestrator |  "sdc": { 2026-03-09 00:43:27.501729 | orchestrator |  "osd_lvm_uuid": "590958f1-5006-5da8-896c-bdb08f0ac33f" 2026-03-09 00:43:27.501739 | orchestrator |  } 2026-03-09 00:43:27.501750 | orchestrator |  }, 2026-03-09 00:43:27.501761 | orchestrator |  "lvm_volumes": [ 2026-03-09 00:43:27.501782 | orchestrator |  { 2026-03-09 00:43:27.501793 | orchestrator |  "data": "osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17", 2026-03-09 00:43:27.501803 | orchestrator |  "data_vg": "ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17" 2026-03-09 00:43:27.501813 | orchestrator |  }, 2026-03-09 00:43:27.501823 | orchestrator |  { 2026-03-09 00:43:27.501833 | orchestrator |  "data": "osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f", 2026-03-09 00:43:27.501843 | orchestrator |  "data_vg": "ceph-590958f1-5006-5da8-896c-bdb08f0ac33f" 2026-03-09 00:43:27.501854 | orchestrator |  } 2026-03-09 00:43:27.501864 | orchestrator |  ] 2026-03-09 00:43:27.501874 | orchestrator |  } 2026-03-09 00:43:27.501884 | orchestrator | } 2026-03-09 00:43:27.501894 | orchestrator | 2026-03-09 00:43:27.501905 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-09 00:43:27.501915 | orchestrator | Monday 09 March 2026 00:43:24 +0000 (0:00:00.272) 0:00:28.259 ********** 2026-03-09 00:43:27.501926 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-09 00:43:27.501936 | orchestrator | 2026-03-09 00:43:27.501958 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-09 00:43:27.501970 | orchestrator | 2026-03-09 00:43:27.501981 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:43:27.501991 | orchestrator | Monday 09 March 2026 00:43:26 +0000 (0:00:01.615) 0:00:29.876 ********** 2026-03-09 00:43:27.502114 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-09 00:43:27.502129 | orchestrator | 2026-03-09 00:43:27.502139 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:43:27.502149 | orchestrator | Monday 09 March 2026 00:43:26 +0000 (0:00:00.781) 0:00:30.658 ********** 2026-03-09 00:43:27.502160 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:43:27.502170 | orchestrator | 2026-03-09 00:43:27.502181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:27.502190 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.239) 0:00:30.897 ********** 2026-03-09 00:43:27.502201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-09 00:43:27.502211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-09 00:43:27.502222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-09 00:43:27.502232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-09 00:43:27.502242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-09 00:43:27.502268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-09 00:43:36.548965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-09 00:43:36.549048 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-09 00:43:36.549057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-09 00:43:36.549064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-09 00:43:36.549071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-09 00:43:36.549078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-09 00:43:36.549084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-09 00:43:36.549090 | orchestrator | 2026-03-09 00:43:36.549098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549105 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.416) 0:00:31.313 ********** 2026-03-09 00:43:36.549111 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549118 | orchestrator | 2026-03-09 00:43:36.549126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549137 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.245) 0:00:31.559 ********** 2026-03-09 00:43:36.549146 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549157 | orchestrator | 2026-03-09 00:43:36.549167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549177 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.247) 0:00:31.806 ********** 2026-03-09 00:43:36.549187 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549197 | orchestrator | 2026-03-09 00:43:36.549206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549216 | 
orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.184) 0:00:31.991 ********** 2026-03-09 00:43:36.549227 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549237 | orchestrator | 2026-03-09 00:43:36.549247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549257 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.206) 0:00:32.197 ********** 2026-03-09 00:43:36.549291 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549303 | orchestrator | 2026-03-09 00:43:36.549313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549323 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.205) 0:00:32.403 ********** 2026-03-09 00:43:36.549333 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549343 | orchestrator | 2026-03-09 00:43:36.549353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549363 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.192) 0:00:32.595 ********** 2026-03-09 00:43:36.549373 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549380 | orchestrator | 2026-03-09 00:43:36.549387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549393 | orchestrator | Monday 09 March 2026 00:43:29 +0000 (0:00:00.203) 0:00:32.799 ********** 2026-03-09 00:43:36.549399 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549405 | orchestrator | 2026-03-09 00:43:36.549411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549418 | orchestrator | Monday 09 March 2026 00:43:29 +0000 (0:00:00.225) 0:00:33.025 ********** 2026-03-09 00:43:36.549428 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847) 2026-03-09 00:43:36.549440 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847) 2026-03-09 00:43:36.549450 | orchestrator | 2026-03-09 00:43:36.549459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549469 | orchestrator | Monday 09 March 2026 00:43:30 +0000 (0:00:00.925) 0:00:33.951 ********** 2026-03-09 00:43:36.549494 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba) 2026-03-09 00:43:36.549579 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba) 2026-03-09 00:43:36.549590 | orchestrator | 2026-03-09 00:43:36.549597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549604 | orchestrator | Monday 09 March 2026 00:43:30 +0000 (0:00:00.449) 0:00:34.400 ********** 2026-03-09 00:43:36.549612 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec) 2026-03-09 00:43:36.549619 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec) 2026-03-09 00:43:36.549626 | orchestrator | 2026-03-09 00:43:36.549634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:36.549641 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.465) 0:00:34.866 ********** 2026-03-09 00:43:36.549648 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560) 2026-03-09 00:43:36.549656 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560) 2026-03-09 00:43:36.549663 | orchestrator | 2026-03-09 00:43:36.549670 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-09 00:43:36.549678 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.509) 0:00:35.375 ********** 2026-03-09 00:43:36.549685 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:43:36.549692 | orchestrator | 2026-03-09 00:43:36.549699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549723 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.371) 0:00:35.747 ********** 2026-03-09 00:43:36.549731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-09 00:43:36.549738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-09 00:43:36.549746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-09 00:43:36.549753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-09 00:43:36.549767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-09 00:43:36.549774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-09 00:43:36.549781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-09 00:43:36.549788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-09 00:43:36.549795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-09 00:43:36.549802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-09 00:43:36.549809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-09 00:43:36.549816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-09 00:43:36.549823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-09 00:43:36.549830 | orchestrator | 2026-03-09 00:43:36.549837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549844 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.511) 0:00:36.258 ********** 2026-03-09 00:43:36.549851 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549858 | orchestrator | 2026-03-09 00:43:36.549866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549873 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.285) 0:00:36.543 ********** 2026-03-09 00:43:36.549880 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549888 | orchestrator | 2026-03-09 00:43:36.549895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549901 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.254) 0:00:36.798 ********** 2026-03-09 00:43:36.549907 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549913 | orchestrator | 2026-03-09 00:43:36.549919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549925 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.285) 0:00:37.083 ********** 2026-03-09 00:43:36.549931 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549938 | orchestrator | 2026-03-09 00:43:36.549944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549950 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.177) 0:00:37.261 ********** 2026-03-09 00:43:36.549956 
| orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549962 | orchestrator | 2026-03-09 00:43:36.549968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549974 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.204) 0:00:37.465 ********** 2026-03-09 00:43:36.549980 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.549986 | orchestrator | 2026-03-09 00:43:36.549992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.549998 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.711) 0:00:38.176 ********** 2026-03-09 00:43:36.550004 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.550011 | orchestrator | 2026-03-09 00:43:36.550064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.550071 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.236) 0:00:38.413 ********** 2026-03-09 00:43:36.550077 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.550083 | orchestrator | 2026-03-09 00:43:36.550089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.550095 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.232) 0:00:38.646 ********** 2026-03-09 00:43:36.550101 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-09 00:43:36.550122 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-09 00:43:36.550128 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-09 00:43:36.550135 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-09 00:43:36.550141 | orchestrator | 2026-03-09 00:43:36.550147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.550153 | orchestrator | Monday 09 March 2026 00:43:35 +0000 (0:00:00.669) 0:00:39.316 
********** 2026-03-09 00:43:36.550159 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.550165 | orchestrator | 2026-03-09 00:43:36.550171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.550178 | orchestrator | Monday 09 March 2026 00:43:35 +0000 (0:00:00.252) 0:00:39.568 ********** 2026-03-09 00:43:36.550184 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.550190 | orchestrator | 2026-03-09 00:43:36.550196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.550202 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.236) 0:00:39.805 ********** 2026-03-09 00:43:36.550208 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.550214 | orchestrator | 2026-03-09 00:43:36.550220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:36.550226 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.234) 0:00:40.039 ********** 2026-03-09 00:43:36.550232 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:36.550238 | orchestrator | 2026-03-09 00:43:36.550249 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-09 00:43:41.058617 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.234) 0:00:40.274 ********** 2026-03-09 00:43:41.058733 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-09 00:43:41.058761 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:43:41.058780 | orchestrator | 2026-03-09 00:43:41.058800 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-09 00:43:41.058818 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.192) 0:00:40.466 ********** 2026-03-09 00:43:41.058839 | orchestrator | skipping: 
[testbed-node-5] 2026-03-09 00:43:41.058859 | orchestrator | 2026-03-09 00:43:41.058877 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:43:41.058894 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.127) 0:00:40.594 ********** 2026-03-09 00:43:41.058923 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.058936 | orchestrator | 2026-03-09 00:43:41.058947 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:43:41.058958 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.138) 0:00:40.733 ********** 2026-03-09 00:43:41.058969 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.058980 | orchestrator | 2026-03-09 00:43:41.058992 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-09 00:43:41.059003 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.356) 0:00:41.089 ********** 2026-03-09 00:43:41.059014 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:43:41.059026 | orchestrator | 2026-03-09 00:43:41.059037 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:43:41.059048 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.144) 0:00:41.234 ********** 2026-03-09 00:43:41.059059 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e95d8336-562c-5e60-938c-e1db43f5f553'}}) 2026-03-09 00:43:41.059075 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c56389c1-f3b1-5ba6-b160-f425a16b3e47'}}) 2026-03-09 00:43:41.059087 | orchestrator | 2026-03-09 00:43:41.059098 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:43:41.059109 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.186) 0:00:41.420 ********** 2026-03-09 00:43:41.059123 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e95d8336-562c-5e60-938c-e1db43f5f553'}})  2026-03-09 00:43:41.059159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c56389c1-f3b1-5ba6-b160-f425a16b3e47'}})  2026-03-09 00:43:41.059173 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059186 | orchestrator | 2026-03-09 00:43:41.059199 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:43:41.059212 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.179) 0:00:41.600 ********** 2026-03-09 00:43:41.059225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e95d8336-562c-5e60-938c-e1db43f5f553'}})  2026-03-09 00:43:41.059238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c56389c1-f3b1-5ba6-b160-f425a16b3e47'}})  2026-03-09 00:43:41.059251 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059270 | orchestrator | 2026-03-09 00:43:41.059289 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:43:41.059308 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.176) 0:00:41.776 ********** 2026-03-09 00:43:41.059327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e95d8336-562c-5e60-938c-e1db43f5f553'}})  2026-03-09 00:43:41.059348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c56389c1-f3b1-5ba6-b160-f425a16b3e47'}})  2026-03-09 00:43:41.059367 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059386 | orchestrator | 2026-03-09 00:43:41.059403 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:43:41.059416 | orchestrator | Monday 09 March 2026 00:43:38 +0000 
(0:00:00.160) 0:00:41.936 ********** 2026-03-09 00:43:41.059429 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:43:41.059442 | orchestrator | 2026-03-09 00:43:41.059454 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:43:41.059465 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.147) 0:00:42.084 ********** 2026-03-09 00:43:41.059476 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:43:41.059487 | orchestrator | 2026-03-09 00:43:41.059497 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:43:41.059540 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.149) 0:00:42.234 ********** 2026-03-09 00:43:41.059561 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059579 | orchestrator | 2026-03-09 00:43:41.059597 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:43:41.059609 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.131) 0:00:42.365 ********** 2026-03-09 00:43:41.059620 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059631 | orchestrator | 2026-03-09 00:43:41.059641 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:43:41.059652 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.153) 0:00:42.518 ********** 2026-03-09 00:43:41.059663 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059674 | orchestrator | 2026-03-09 00:43:41.059685 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:43:41.059696 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.139) 0:00:42.658 ********** 2026-03-09 00:43:41.059707 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:43:41.059718 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:43:41.059729 | orchestrator |  "sdb": 
{ 2026-03-09 00:43:41.059761 | orchestrator |  "osd_lvm_uuid": "e95d8336-562c-5e60-938c-e1db43f5f553" 2026-03-09 00:43:41.059773 | orchestrator |  }, 2026-03-09 00:43:41.059784 | orchestrator |  "sdc": { 2026-03-09 00:43:41.059796 | orchestrator |  "osd_lvm_uuid": "c56389c1-f3b1-5ba6-b160-f425a16b3e47" 2026-03-09 00:43:41.059807 | orchestrator |  } 2026-03-09 00:43:41.059817 | orchestrator |  } 2026-03-09 00:43:41.059829 | orchestrator | } 2026-03-09 00:43:41.059840 | orchestrator | 2026-03-09 00:43:41.059860 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-09 00:43:41.059871 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.137) 0:00:42.795 ********** 2026-03-09 00:43:41.059882 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059893 | orchestrator | 2026-03-09 00:43:41.059904 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-09 00:43:41.059914 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.143) 0:00:42.938 ********** 2026-03-09 00:43:41.059925 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059936 | orchestrator | 2026-03-09 00:43:41.059947 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-09 00:43:41.059957 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.371) 0:00:43.310 ********** 2026-03-09 00:43:41.059968 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:41.059978 | orchestrator | 2026-03-09 00:43:41.059989 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-09 00:43:41.060000 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.137) 0:00:43.448 ********** 2026-03-09 00:43:41.060010 | orchestrator | changed: [testbed-node-5] => { 2026-03-09 00:43:41.060021 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-09 00:43:41.060033 | orchestrator 
|  "ceph_osd_devices": { 2026-03-09 00:43:41.060044 | orchestrator |  "sdb": { 2026-03-09 00:43:41.060055 | orchestrator |  "osd_lvm_uuid": "e95d8336-562c-5e60-938c-e1db43f5f553" 2026-03-09 00:43:41.060066 | orchestrator |  }, 2026-03-09 00:43:41.060077 | orchestrator |  "sdc": { 2026-03-09 00:43:41.060088 | orchestrator |  "osd_lvm_uuid": "c56389c1-f3b1-5ba6-b160-f425a16b3e47" 2026-03-09 00:43:41.060099 | orchestrator |  } 2026-03-09 00:43:41.060109 | orchestrator |  }, 2026-03-09 00:43:41.060121 | orchestrator |  "lvm_volumes": [ 2026-03-09 00:43:41.060131 | orchestrator |  { 2026-03-09 00:43:41.060143 | orchestrator |  "data": "osd-block-e95d8336-562c-5e60-938c-e1db43f5f553", 2026-03-09 00:43:41.060153 | orchestrator |  "data_vg": "ceph-e95d8336-562c-5e60-938c-e1db43f5f553" 2026-03-09 00:43:41.060164 | orchestrator |  }, 2026-03-09 00:43:41.060180 | orchestrator |  { 2026-03-09 00:43:41.060191 | orchestrator |  "data": "osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47", 2026-03-09 00:43:41.060202 | orchestrator |  "data_vg": "ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47" 2026-03-09 00:43:41.060213 | orchestrator |  } 2026-03-09 00:43:41.060224 | orchestrator |  ] 2026-03-09 00:43:41.060235 | orchestrator |  } 2026-03-09 00:43:41.060246 | orchestrator | } 2026-03-09 00:43:41.060257 | orchestrator | 2026-03-09 00:43:41.060268 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-09 00:43:41.060278 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.229) 0:00:43.678 ********** 2026-03-09 00:43:41.060289 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-09 00:43:41.060300 | orchestrator | 2026-03-09 00:43:41.060318 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:43:41.060337 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 00:43:41.060390 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 00:43:41.060408 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 00:43:41.060419 | orchestrator | 2026-03-09 00:43:41.060430 | orchestrator | 2026-03-09 00:43:41.060441 | orchestrator | 2026-03-09 00:43:41.060452 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:43:41.060462 | orchestrator | Monday 09 March 2026 00:43:41 +0000 (0:00:01.091) 0:00:44.770 ********** 2026-03-09 00:43:41.060482 | orchestrator | =============================================================================== 2026-03-09 00:43:41.060493 | orchestrator | Write configuration file ------------------------------------------------ 4.22s 2026-03-09 00:43:41.060578 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s 2026-03-09 00:43:41.060609 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.26s 2026-03-09 00:43:41.060621 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s 2026-03-09 00:43:41.060632 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2026-03-09 00:43:41.060641 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-03-09 00:43:41.060651 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-03-09 00:43:41.060660 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-03-09 00:43:41.060670 | orchestrator | Print configuration data ------------------------------------------------ 0.83s 2026-03-09 00:43:41.060679 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-03-09 
00:43:41.060689 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-03-09 00:43:41.060699 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.72s 2026-03-09 00:43:41.060708 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-09 00:43:41.060727 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-09 00:43:41.429917 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-09 00:43:41.430066 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-03-09 00:43:41.430086 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-03-09 00:43:41.430102 | orchestrator | Print DB devices -------------------------------------------------------- 0.66s 2026-03-09 00:43:41.430113 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-09 00:43:41.430124 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.65s 2026-03-09 00:44:04.243348 | orchestrator | 2026-03-09 00:44:04 | INFO  | Task 68ae1655-d8ca-4887-9e92-16775ba66c94 (sync inventory) is running in background. Output coming soon. 
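The play above derives each node's `lvm_volumes` list directly from its `ceph_osd_devices` mapping: in the block-only layout, every OSD device gets one LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that derivation, using the exact values printed for testbed-node-4 (the function name is illustrative, not the actual playbook code):

```python
# Sketch only: rebuild the lvm_volumes list that the "Compile lvm_volumes"
# task prints for testbed-node-4. The ceph_osd_devices mapping is copied
# verbatim from the "Print ceph_osd_devices" task output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "9c74837a-43e3-5ea9-9fe0-5cec11260b17"},
    "sdc": {"osd_lvm_uuid": "590958f1-5006-5da8-896c-bdb08f0ac33f"},
}


def compile_lvm_volumes(devices):
    """Block-only layout: one LV/VG pair per OSD device, named by its UUID."""
    return [
        {
            "data": f"osd-block-{v['osd_lvm_uuid']}",
            "data_vg": f"ceph-{v['osd_lvm_uuid']}",
        }
        for v in devices.values()
    ]


lvm_volumes = compile_lvm_volumes(ceph_osd_devices)
```

This matches the `_ceph_configure_lvm_config_data` structure shown in the "Print configuration data" task, where the same two entries appear under `lvm_volumes`; the db/wal variants of the task were skipped here because no separate DB/WAL devices are configured.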
2026-03-09 00:44:33.673531 | orchestrator | 2026-03-09 00:44:05 | INFO  | Starting group_vars file reorganization
2026-03-09 00:44:33.673645 | orchestrator | 2026-03-09 00:44:05 | INFO  | Moved 0 file(s) to their respective directories
2026-03-09 00:44:33.673673 | orchestrator | 2026-03-09 00:44:05 | INFO  | Group_vars file reorganization completed
2026-03-09 00:44:33.673692 | orchestrator | 2026-03-09 00:44:08 | INFO  | Starting variable preparation from inventory
2026-03-09 00:44:33.673711 | orchestrator | 2026-03-09 00:44:11 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-09 00:44:33.673731 | orchestrator | 2026-03-09 00:44:11 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-09 00:44:33.673773 | orchestrator | 2026-03-09 00:44:11 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-09 00:44:33.673792 | orchestrator | 2026-03-09 00:44:11 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-09 00:44:33.673810 | orchestrator | 2026-03-09 00:44:11 | INFO  | Variable preparation completed
2026-03-09 00:44:33.673828 | orchestrator | 2026-03-09 00:44:13 | INFO  | Starting inventory overwrite handling
2026-03-09 00:44:33.673846 | orchestrator | 2026-03-09 00:44:13 | INFO  | Handling group overwrites in 99-overwrite
2026-03-09 00:44:33.673865 | orchestrator | 2026-03-09 00:44:13 | INFO  | Removing group frr:children from 60-generic
2026-03-09 00:44:33.673912 | orchestrator | 2026-03-09 00:44:13 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-09 00:44:33.673931 | orchestrator | 2026-03-09 00:44:13 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-09 00:44:33.673949 | orchestrator | 2026-03-09 00:44:13 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-09 00:44:33.673968 | orchestrator | 2026-03-09 00:44:13 | INFO  | Handling group overwrites in 20-roles
2026-03-09 00:44:33.673986 | orchestrator | 2026-03-09 00:44:13 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-09 00:44:33.674005 | orchestrator | 2026-03-09 00:44:13 | INFO  | Removed 5 group(s) in total
2026-03-09 00:44:33.674082 | orchestrator | 2026-03-09 00:44:13 | INFO  | Inventory overwrite handling completed
2026-03-09 00:44:33.674097 | orchestrator | 2026-03-09 00:44:14 | INFO  | Starting merge of inventory files
2026-03-09 00:44:33.674110 | orchestrator | 2026-03-09 00:44:14 | INFO  | Inventory files merged successfully
2026-03-09 00:44:33.674122 | orchestrator | 2026-03-09 00:44:19 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-09 00:44:33.674135 | orchestrator | 2026-03-09 00:44:32 | INFO  | Successfully wrote ClusterShell configuration
2026-03-09 00:44:33.674148 | orchestrator | [master 5473eba] 2026-03-09-00-44
2026-03-09 00:44:33.674162 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-09 00:44:35.812743 | orchestrator | 2026-03-09 00:44:35 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-09 00:44:35.882301 | orchestrator | 2026-03-09 00:44:35 | INFO  | Task 5fac4aa9-02ef-4f99-823a-011fe987134f (ceph-create-lvm-devices) was prepared for execution.
2026-03-09 00:44:35.882381 | orchestrator | 2026-03-09 00:44:35 | INFO  | It takes a moment until task 5fac4aa9-02ef-4f99-823a-011fe987134f (ceph-create-lvm-devices) has been started and output is visible here.
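The ceph-create-lvm-devices play whose output follows derives its LVM names from the `ceph_osd_devices` entries: each device carries an `osd_lvm_uuid`, and the play creates a volume group `ceph-<uuid>` holding a logical volume `osd-block-<uuid>`. A minimal Python sketch of that mapping (illustrative only, not the playbook's actual code; the data layout is taken from the item dicts visible in the task output):

```python
# Reproduce the {'data': ..., 'data_vg': ...} items that the
# "Create block VGs" / "Create block LVs" tasks iterate over.
# Device names and UUIDs below are the ones shown for testbed-node-3.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "0b4a24c5-7164-5e55-92cc-433a48be10d0"},
    "sdc": {"osd_lvm_uuid": "07cae8b8-d309-58e5-9f3f-3806cd3fe573"},
}

def block_vg_items(devices: dict) -> list[dict]:
    """One VG/LV name pair per OSD device, keyed by its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

for item in block_vg_items(ceph_osd_devices):
    print(item["data_vg"])
```

Because the UUID is stable per device, rerunning the play is idempotent: the VG/LV names come out the same and existing volumes are left in place.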
2026-03-09 00:44:48.224216 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 00:44:48.224341 | orchestrator | 2.16.14
2026-03-09 00:44:48.224356 | orchestrator |
2026-03-09 00:44:48.224365 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-09 00:44:48.224374 | orchestrator |
2026-03-09 00:44:48.224403 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:44:48.224412 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.278) 0:00:00.278 **********
2026-03-09 00:44:48.224420 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-09 00:44:48.224429 | orchestrator |
2026-03-09 00:44:48.224436 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:44:48.224443 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.245) 0:00:00.523 **********
2026-03-09 00:44:48.224451 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:44:48.224458 | orchestrator |
2026-03-09 00:44:48.224465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224491 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.232) 0:00:00.755 **********
2026-03-09 00:44:48.224581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-09 00:44:48.224590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-09 00:44:48.224597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-09 00:44:48.224604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-09 00:44:48.224611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-09 00:44:48.224617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-09 00:44:48.224624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-09 00:44:48.224653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-09 00:44:48.224660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-09 00:44:48.224666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-09 00:44:48.224673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-09 00:44:48.224680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-09 00:44:48.224687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-09 00:44:48.224693 | orchestrator |
2026-03-09 00:44:48.224704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224712 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.560) 0:00:01.316 **********
2026-03-09 00:44:48.224719 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224727 | orchestrator |
2026-03-09 00:44:48.224735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224743 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.196) 0:00:01.512 **********
2026-03-09 00:44:48.224751 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224759 | orchestrator |
2026-03-09 00:44:48.224766 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224774 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.186) 0:00:01.698 **********
2026-03-09 00:44:48.224782 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224790 | orchestrator |
2026-03-09 00:44:48.224798 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224806 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.198) 0:00:01.896 **********
2026-03-09 00:44:48.224813 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224821 | orchestrator |
2026-03-09 00:44:48.224828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224836 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.209) 0:00:02.106 **********
2026-03-09 00:44:48.224844 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224852 | orchestrator |
2026-03-09 00:44:48.224859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224884 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.179) 0:00:02.286 **********
2026-03-09 00:44:48.224893 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224901 | orchestrator |
2026-03-09 00:44:48.224909 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224916 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.183) 0:00:02.470 **********
2026-03-09 00:44:48.224923 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224931 | orchestrator |
2026-03-09 00:44:48.224938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224946 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.193) 0:00:02.663 **********
2026-03-09 00:44:48.224953 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.224960 | orchestrator |
2026-03-09 00:44:48.224966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.224973 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.171) 0:00:02.835 **********
2026-03-09 00:44:48.224979 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d)
2026-03-09 00:44:48.224987 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d)
2026-03-09 00:44:48.224995 | orchestrator |
2026-03-09 00:44:48.225002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.225027 | orchestrator | Monday 09 March 2026 00:44:43 +0000 (0:00:00.430) 0:00:03.266 **********
2026-03-09 00:44:48.225043 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284)
2026-03-09 00:44:48.225051 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284)
2026-03-09 00:44:48.225059 | orchestrator |
2026-03-09 00:44:48.225066 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.225074 | orchestrator | Monday 09 March 2026 00:44:43 +0000 (0:00:00.724) 0:00:03.990 **********
2026-03-09 00:44:48.225081 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393)
2026-03-09 00:44:48.225089 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393)
2026-03-09 00:44:48.225095 | orchestrator |
2026-03-09 00:44:48.225102 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.225108 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.737) 0:00:04.727 **********
2026-03-09 00:44:48.225115 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f)
2026-03-09 00:44:48.225122 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f)
2026-03-09 00:44:48.225129 | orchestrator |
2026-03-09 00:44:48.225135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:48.225142 | orchestrator | Monday 09 March 2026 00:44:45 +0000 (0:00:01.042) 0:00:05.769 **********
2026-03-09 00:44:48.225149 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:44:48.225156 | orchestrator |
2026-03-09 00:44:48.225162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225169 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.354) 0:00:06.124 **********
2026-03-09 00:44:48.225176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-09 00:44:48.225183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-09 00:44:48.225189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-09 00:44:48.225195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-09 00:44:48.225205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-09 00:44:48.225218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-09 00:44:48.225225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-09 00:44:48.225231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-09 00:44:48.225238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-09 00:44:48.225245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-09 00:44:48.225250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-09 00:44:48.225256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-09 00:44:48.225262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-09 00:44:48.225268 | orchestrator |
2026-03-09 00:44:48.225275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225282 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.454) 0:00:06.579 **********
2026-03-09 00:44:48.225288 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225295 | orchestrator |
2026-03-09 00:44:48.225302 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225309 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.193) 0:00:06.772 **********
2026-03-09 00:44:48.225320 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225327 | orchestrator |
2026-03-09 00:44:48.225334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225340 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.214) 0:00:06.986 **********
2026-03-09 00:44:48.225347 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225353 | orchestrator |
2026-03-09 00:44:48.225360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225366 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.246) 0:00:07.233 **********
2026-03-09 00:44:48.225373 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225380 | orchestrator |
2026-03-09 00:44:48.225386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225393 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.273) 0:00:07.506 **********
2026-03-09 00:44:48.225400 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225406 | orchestrator |
2026-03-09 00:44:48.225412 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225419 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.240) 0:00:07.746 **********
2026-03-09 00:44:48.225426 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225432 | orchestrator |
2026-03-09 00:44:48.225439 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:48.225446 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.299) 0:00:08.046 **********
2026-03-09 00:44:48.225453 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:48.225460 | orchestrator |
2026-03-09 00:44:48.225472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:57.140724 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.251) 0:00:08.298 **********
2026-03-09 00:44:57.140819 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.140832 | orchestrator |
2026-03-09 00:44:57.140840 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:57.140848 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.259) 0:00:08.557 **********
2026-03-09 00:44:57.140855 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-09 00:44:57.140863 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-09 00:44:57.140869 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-09 00:44:57.140876 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-09 00:44:57.140884 | orchestrator |
2026-03-09 00:44:57.140890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:57.140897 | orchestrator | Monday 09 March 2026 00:44:49 +0000 (0:00:01.294) 0:00:09.852 **********
2026-03-09 00:44:57.140904 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.140911 | orchestrator |
2026-03-09 00:44:57.140918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:57.140925 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.315) 0:00:10.168 **********
2026-03-09 00:44:57.140931 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.140937 | orchestrator |
2026-03-09 00:44:57.140944 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:57.140951 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.257) 0:00:10.425 **********
2026-03-09 00:44:57.140957 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.140963 | orchestrator |
2026-03-09 00:44:57.140969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:57.140975 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.242) 0:00:10.668 **********
2026-03-09 00:44:57.140981 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.140987 | orchestrator |
2026-03-09 00:44:57.140994 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-09 00:44:57.141001 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.273) 0:00:10.941 **********
2026-03-09 00:44:57.141008 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141034 | orchestrator |
2026-03-09 00:44:57.141041 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-09 00:44:57.141047 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:00.174) 0:00:11.116 **********
2026-03-09 00:44:57.141055 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b4a24c5-7164-5e55-92cc-433a48be10d0'}})
2026-03-09 00:44:57.141063 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07cae8b8-d309-58e5-9f3f-3806cd3fe573'}})
2026-03-09 00:44:57.141070 | orchestrator |
2026-03-09 00:44:57.141078 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-09 00:44:57.141084 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:00.238) 0:00:11.355 **********
2026-03-09 00:44:57.141093 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141101 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141108 | orchestrator |
2026-03-09 00:44:57.141115 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-09 00:44:57.141122 | orchestrator | Monday 09 March 2026 00:44:53 +0000 (0:00:02.144) 0:00:13.500 **********
2026-03-09 00:44:57.141129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141144 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141151 | orchestrator |
2026-03-09 00:44:57.141158 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-09 00:44:57.141165 | orchestrator | Monday 09 March 2026 00:44:53 +0000 (0:00:00.156) 0:00:13.656 **********
2026-03-09 00:44:57.141172 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141179 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141186 | orchestrator |
2026-03-09 00:44:57.141208 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-09 00:44:57.141216 | orchestrator | Monday 09 March 2026 00:44:55 +0000 (0:00:01.482) 0:00:15.138 **********
2026-03-09 00:44:57.141223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141237 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141244 | orchestrator |
2026-03-09 00:44:57.141251 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-09 00:44:57.141258 | orchestrator | Monday 09 March 2026 00:44:55 +0000 (0:00:00.143) 0:00:15.301 **********
2026-03-09 00:44:57.141283 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141292 | orchestrator |
2026-03-09 00:44:57.141299 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-09 00:44:57.141307 | orchestrator | Monday 09 March 2026 00:44:55 +0000 (0:00:00.145) 0:00:15.445 **********
2026-03-09 00:44:57.141315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141339 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141347 | orchestrator |
2026-03-09 00:44:57.141355 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-09 00:44:57.141363 | orchestrator | Monday 09 March 2026 00:44:55 +0000 (0:00:00.380) 0:00:15.826 **********
2026-03-09 00:44:57.141371 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141379 | orchestrator |
2026-03-09 00:44:57.141387 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-09 00:44:57.141395 | orchestrator | Monday 09 March 2026 00:44:55 +0000 (0:00:00.149) 0:00:15.976 **********
2026-03-09 00:44:57.141403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141418 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141426 | orchestrator |
2026-03-09 00:44:57.141435 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-09 00:44:57.141443 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.169) 0:00:16.146 **********
2026-03-09 00:44:57.141450 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141457 | orchestrator |
2026-03-09 00:44:57.141466 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-09 00:44:57.141474 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.132) 0:00:16.279 **********
2026-03-09 00:44:57.141482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141541 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141549 | orchestrator |
2026-03-09 00:44:57.141557 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-09 00:44:57.141565 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.169) 0:00:16.448 **********
2026-03-09 00:44:57.141574 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:44:57.141582 | orchestrator |
2026-03-09 00:44:57.141589 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-09 00:44:57.141596 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.142) 0:00:16.591 **********
2026-03-09 00:44:57.141602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141616 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141624 | orchestrator |
2026-03-09 00:44:57.141632 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-09 00:44:57.141639 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.154) 0:00:16.745 **********
2026-03-09 00:44:57.141647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141661 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141669 | orchestrator |
2026-03-09 00:44:57.141676 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-09 00:44:57.141690 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.154) 0:00:16.899 **********
2026-03-09 00:44:57.141698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})
2026-03-09 00:44:57.141705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})
2026-03-09 00:44:57.141712 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141719 | orchestrator |
2026-03-09 00:44:57.141727 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-09 00:44:57.141734 | orchestrator | Monday 09 March 2026 00:44:56 +0000 (0:00:00.177) 0:00:17.077 **********
2026-03-09 00:44:57.141741 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:44:57.141748 | orchestrator |
2026-03-09 00:44:57.141756 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-09 00:44:57.141771 | orchestrator | Monday 09 March 2026 00:44:57 +0000 (0:00:00.143) 0:00:17.221 **********
2026-03-09 00:45:03.920143 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.920271 | orchestrator |
2026-03-09 00:45:03.920289 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-09 00:45:03.920301 | orchestrator | Monday 09 March 2026 00:44:57 +0000 (0:00:00.146) 0:00:17.368 **********
2026-03-09 00:45:03.920311 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.920321 | orchestrator |
2026-03-09 00:45:03.920331 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-09 00:45:03.920341 | orchestrator | Monday 09 March 2026 00:44:57 +0000 (0:00:00.139) 0:00:17.508 **********
2026-03-09 00:45:03.920351 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:45:03.920362 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-09 00:45:03.920372 | orchestrator | }
2026-03-09 00:45:03.920382 | orchestrator |
2026-03-09 00:45:03.920392 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-09 00:45:03.920402 | orchestrator | Monday 09 March 2026 00:44:57 +0000 (0:00:00.366) 0:00:17.874 **********
2026-03-09 00:45:03.920411 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:45:03.920422 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-09 00:45:03.920432 | orchestrator | }
2026-03-09 00:45:03.920442 | orchestrator |
2026-03-09 00:45:03.920452 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-09 00:45:03.920461 | orchestrator | Monday 09 March 2026 00:44:57 +0000 (0:00:00.150) 0:00:18.024 **********
2026-03-09 00:45:03.920471 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:45:03.920481 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-09 00:45:03.920491 | orchestrator | }
2026-03-09 00:45:03.920549 | orchestrator |
2026-03-09 00:45:03.920560 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-09 00:45:03.920569 | orchestrator | Monday 09 March 2026 00:44:58 +0000 (0:00:00.141) 0:00:18.165 **********
2026-03-09 00:45:03.920579 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:45:03.920589 | orchestrator |
2026-03-09 00:45:03.920599 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-09 00:45:03.920609 | orchestrator | Monday 09 March 2026 00:44:58 +0000 (0:00:00.706) 0:00:18.872 **********
2026-03-09 00:45:03.920619 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:45:03.920628 | orchestrator |
2026-03-09 00:45:03.920638 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-09 00:45:03.920648 | orchestrator | Monday 09 March 2026 00:44:59 +0000 (0:00:00.524) 0:00:19.396 **********
2026-03-09 00:45:03.920658 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:45:03.920668 | orchestrator |
2026-03-09 00:45:03.920678 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-09 00:45:03.920690 | orchestrator | Monday 09 March 2026 00:44:59 +0000 (0:00:00.527) 0:00:19.924 **********
2026-03-09 00:45:03.920701 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:45:03.920713 | orchestrator |
2026-03-09 00:45:03.920745 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-09 00:45:03.920757 | orchestrator | Monday 09 March 2026 00:45:00 +0000 (0:00:00.227) 0:00:20.152 **********
2026-03-09 00:45:03.920768 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.920779 | orchestrator |
2026-03-09 00:45:03.920791 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-09 00:45:03.920803 | orchestrator | Monday 09 March 2026 00:45:00 +0000 (0:00:00.122) 0:00:20.274 **********
2026-03-09 00:45:03.920826 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.920847 | orchestrator |
2026-03-09 00:45:03.920860 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-09 00:45:03.920871 | orchestrator | Monday 09 March 2026 00:45:00 +0000 (0:00:00.115) 0:00:20.389 **********
2026-03-09 00:45:03.920883 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:45:03.920895 | orchestrator |  "vgs_report": {
2026-03-09 00:45:03.920906 | orchestrator |  "vg": []
2026-03-09 00:45:03.920916 | orchestrator |  }
2026-03-09 00:45:03.920926 | orchestrator | }
2026-03-09 00:45:03.920936 | orchestrator |
2026-03-09 00:45:03.920946 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-09 00:45:03.920956 | orchestrator | Monday 09 March 2026 00:45:00 +0000 (0:00:00.130) 0:00:20.519 **********
2026-03-09 00:45:03.920965 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.920975 | orchestrator |
2026-03-09 00:45:03.920984 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-09 00:45:03.920994 | orchestrator | Monday 09 March 2026 00:45:00 +0000 (0:00:00.145) 0:00:20.676 **********
2026-03-09 00:45:03.921003 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.921013 | orchestrator |
2026-03-09 00:45:03.921022 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-09 00:45:03.921032 | orchestrator | Monday 09 March 2026 00:45:00 +0000 (0:00:00.157) 0:00:20.822 **********
2026-03-09 00:45:03.921041 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.921052 | orchestrator |
2026-03-09 00:45:03.921061 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-09 00:45:03.921071 | orchestrator | Monday 09 March 2026 00:45:01 +0000 (0:00:00.363) 0:00:21.186 **********
2026-03-09 00:45:03.921080 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:45:03.921090 | orchestrator |
2026-03-09 00:45:03.921099 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-09 00:45:03.921109 | orchestrator | Monday
09 March 2026 00:45:01 +0000 (0:00:00.136) 0:00:21.323 ********** 2026-03-09 00:45:03.921118 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921128 | orchestrator | 2026-03-09 00:45:03.921137 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-09 00:45:03.921147 | orchestrator | Monday 09 March 2026 00:45:01 +0000 (0:00:00.150) 0:00:21.473 ********** 2026-03-09 00:45:03.921156 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921166 | orchestrator | 2026-03-09 00:45:03.921175 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-09 00:45:03.921185 | orchestrator | Monday 09 March 2026 00:45:01 +0000 (0:00:00.171) 0:00:21.644 ********** 2026-03-09 00:45:03.921194 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921204 | orchestrator | 2026-03-09 00:45:03.921213 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-09 00:45:03.921223 | orchestrator | Monday 09 March 2026 00:45:01 +0000 (0:00:00.161) 0:00:21.806 ********** 2026-03-09 00:45:03.921248 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921258 | orchestrator | 2026-03-09 00:45:03.921268 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-09 00:45:03.921277 | orchestrator | Monday 09 March 2026 00:45:01 +0000 (0:00:00.143) 0:00:21.949 ********** 2026-03-09 00:45:03.921287 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921296 | orchestrator | 2026-03-09 00:45:03.921306 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-09 00:45:03.921323 | orchestrator | Monday 09 March 2026 00:45:02 +0000 (0:00:00.162) 0:00:22.112 ********** 2026-03-09 00:45:03.921333 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921342 | orchestrator | 2026-03-09 00:45:03.921352 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-09 00:45:03.921362 | orchestrator | Monday 09 March 2026 00:45:02 +0000 (0:00:00.136) 0:00:22.248 ********** 2026-03-09 00:45:03.921371 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921381 | orchestrator | 2026-03-09 00:45:03.921407 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-09 00:45:03.921417 | orchestrator | Monday 09 March 2026 00:45:02 +0000 (0:00:00.131) 0:00:22.380 ********** 2026-03-09 00:45:03.921427 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921437 | orchestrator | 2026-03-09 00:45:03.921446 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-09 00:45:03.921456 | orchestrator | Monday 09 March 2026 00:45:02 +0000 (0:00:00.155) 0:00:22.535 ********** 2026-03-09 00:45:03.921465 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921475 | orchestrator | 2026-03-09 00:45:03.921484 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-09 00:45:03.921515 | orchestrator | Monday 09 March 2026 00:45:02 +0000 (0:00:00.149) 0:00:22.685 ********** 2026-03-09 00:45:03.921525 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921559 | orchestrator | 2026-03-09 00:45:03.921569 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-09 00:45:03.921579 | orchestrator | Monday 09 March 2026 00:45:02 +0000 (0:00:00.150) 0:00:22.835 ********** 2026-03-09 00:45:03.921590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:03.921602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 
'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:03.921612 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921622 | orchestrator | 2026-03-09 00:45:03.921631 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-09 00:45:03.921646 | orchestrator | Monday 09 March 2026 00:45:03 +0000 (0:00:00.387) 0:00:23.223 ********** 2026-03-09 00:45:03.921656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:03.921666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:03.921676 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921685 | orchestrator | 2026-03-09 00:45:03.921695 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-09 00:45:03.921705 | orchestrator | Monday 09 March 2026 00:45:03 +0000 (0:00:00.166) 0:00:23.389 ********** 2026-03-09 00:45:03.921714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:03.921724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:03.921734 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921743 | orchestrator | 2026-03-09 00:45:03.921753 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-09 00:45:03.921763 | orchestrator | Monday 09 March 2026 00:45:03 +0000 (0:00:00.177) 0:00:23.567 ********** 2026-03-09 00:45:03.921772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:03.921782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:03.921798 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921808 | orchestrator | 2026-03-09 00:45:03.921818 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-09 00:45:03.921827 | orchestrator | Monday 09 March 2026 00:45:03 +0000 (0:00:00.177) 0:00:23.745 ********** 2026-03-09 00:45:03.921837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:03.921846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:03.921856 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:03.921866 | orchestrator | 2026-03-09 00:45:03.921875 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-09 00:45:03.921885 | orchestrator | Monday 09 March 2026 00:45:03 +0000 (0:00:00.177) 0:00:23.923 ********** 2026-03-09 00:45:03.921901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:09.222979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:09.223070 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:09.223087 | orchestrator | 2026-03-09 00:45:09.223100 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-09 00:45:09.223112 | orchestrator | Monday 09 March 2026 00:45:03 +0000 (0:00:00.163) 0:00:24.086 ********** 2026-03-09 00:45:09.223124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:09.223136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:09.223147 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:09.223158 | orchestrator | 2026-03-09 00:45:09.223169 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-09 00:45:09.223180 | orchestrator | Monday 09 March 2026 00:45:04 +0000 (0:00:00.178) 0:00:24.264 ********** 2026-03-09 00:45:09.223191 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:09.223202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:09.223213 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:09.223224 | orchestrator | 2026-03-09 00:45:09.223235 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-09 00:45:09.223246 | orchestrator | Monday 09 March 2026 00:45:04 +0000 (0:00:00.184) 0:00:24.448 ********** 2026-03-09 00:45:09.223256 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:45:09.223268 | orchestrator | 2026-03-09 00:45:09.223279 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-09 00:45:09.223290 | orchestrator | Monday 09 March 2026 00:45:04 +0000 
(0:00:00.510) 0:00:24.959 ********** 2026-03-09 00:45:09.223300 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:45:09.223311 | orchestrator | 2026-03-09 00:45:09.223322 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-09 00:45:09.223348 | orchestrator | Monday 09 March 2026 00:45:05 +0000 (0:00:00.505) 0:00:25.465 ********** 2026-03-09 00:45:09.223359 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:45:09.223370 | orchestrator | 2026-03-09 00:45:09.223381 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-09 00:45:09.223392 | orchestrator | Monday 09 March 2026 00:45:05 +0000 (0:00:00.149) 0:00:25.614 ********** 2026-03-09 00:45:09.223424 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'vg_name': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'}) 2026-03-09 00:45:09.223436 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'vg_name': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'}) 2026-03-09 00:45:09.223447 | orchestrator | 2026-03-09 00:45:09.223458 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-09 00:45:09.223469 | orchestrator | Monday 09 March 2026 00:45:05 +0000 (0:00:00.178) 0:00:25.793 ********** 2026-03-09 00:45:09.223480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:09.223491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:09.223548 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:09.223562 | orchestrator | 2026-03-09 00:45:09.223576 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-09 00:45:09.223589 | orchestrator | Monday 09 March 2026 00:45:06 +0000 (0:00:00.392) 0:00:26.185 ********** 2026-03-09 00:45:09.223602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:09.223615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:09.223627 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:09.223640 | orchestrator | 2026-03-09 00:45:09.223652 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-09 00:45:09.223665 | orchestrator | Monday 09 March 2026 00:45:06 +0000 (0:00:00.175) 0:00:26.361 ********** 2026-03-09 00:45:09.223679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'})  2026-03-09 00:45:09.223692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'})  2026-03-09 00:45:09.223706 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:45:09.223718 | orchestrator | 2026-03-09 00:45:09.223729 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-09 00:45:09.223739 | orchestrator | Monday 09 March 2026 00:45:06 +0000 (0:00:00.152) 0:00:26.513 ********** 2026-03-09 00:45:09.223765 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:45:09.223777 | orchestrator |  "lvm_report": { 2026-03-09 00:45:09.223789 | orchestrator |  "lv": [ 2026-03-09 00:45:09.223800 | orchestrator |  { 2026-03-09 00:45:09.223812 | orchestrator |  "lv_name": 
"osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573", 2026-03-09 00:45:09.223824 | orchestrator |  "vg_name": "ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573" 2026-03-09 00:45:09.223835 | orchestrator |  }, 2026-03-09 00:45:09.223846 | orchestrator |  { 2026-03-09 00:45:09.223857 | orchestrator |  "lv_name": "osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0", 2026-03-09 00:45:09.223868 | orchestrator |  "vg_name": "ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0" 2026-03-09 00:45:09.223879 | orchestrator |  } 2026-03-09 00:45:09.223890 | orchestrator |  ], 2026-03-09 00:45:09.223901 | orchestrator |  "pv": [ 2026-03-09 00:45:09.223911 | orchestrator |  { 2026-03-09 00:45:09.223922 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:45:09.223934 | orchestrator |  "vg_name": "ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0" 2026-03-09 00:45:09.223944 | orchestrator |  }, 2026-03-09 00:45:09.223955 | orchestrator |  { 2026-03-09 00:45:09.223974 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:45:09.223985 | orchestrator |  "vg_name": "ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573" 2026-03-09 00:45:09.223996 | orchestrator |  } 2026-03-09 00:45:09.224007 | orchestrator |  ] 2026-03-09 00:45:09.224018 | orchestrator |  } 2026-03-09 00:45:09.224029 | orchestrator | } 2026-03-09 00:45:09.224040 | orchestrator | 2026-03-09 00:45:09.224052 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-09 00:45:09.224062 | orchestrator | 2026-03-09 00:45:09.224074 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:45:09.224085 | orchestrator | Monday 09 March 2026 00:45:06 +0000 (0:00:00.305) 0:00:26.818 ********** 2026-03-09 00:45:09.224096 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-09 00:45:09.224107 | orchestrator | 2026-03-09 00:45:09.224117 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 
00:45:09.224128 | orchestrator | Monday 09 March 2026 00:45:06 +0000 (0:00:00.260) 0:00:27.079 ********** 2026-03-09 00:45:09.224139 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:45:09.224150 | orchestrator | 2026-03-09 00:45:09.224161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:09.224172 | orchestrator | Monday 09 March 2026 00:45:07 +0000 (0:00:00.242) 0:00:27.322 ********** 2026-03-09 00:45:09.224183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:45:09.224194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:45:09.224205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:45:09.224216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:45:09.224227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:45:09.224238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:45:09.224249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:45:09.224260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:45:09.224270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-09 00:45:09.224281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:45:09.224292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:45:09.224303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:45:09.224314 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:45:09.224325 | orchestrator | 2026-03-09 00:45:09.224335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:09.224346 | orchestrator | Monday 09 March 2026 00:45:07 +0000 (0:00:00.454) 0:00:27.777 ********** 2026-03-09 00:45:09.224357 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:09.224368 | orchestrator | 2026-03-09 00:45:09.224379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:09.224397 | orchestrator | Monday 09 March 2026 00:45:07 +0000 (0:00:00.206) 0:00:27.984 ********** 2026-03-09 00:45:09.224408 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:09.224419 | orchestrator | 2026-03-09 00:45:09.224430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:09.224441 | orchestrator | Monday 09 March 2026 00:45:08 +0000 (0:00:00.213) 0:00:28.197 ********** 2026-03-09 00:45:09.224452 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:09.224463 | orchestrator | 2026-03-09 00:45:09.224473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:09.224491 | orchestrator | Monday 09 March 2026 00:45:08 +0000 (0:00:00.535) 0:00:28.733 ********** 2026-03-09 00:45:09.224527 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:09.224538 | orchestrator | 2026-03-09 00:45:09.224549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:09.224560 | orchestrator | Monday 09 March 2026 00:45:08 +0000 (0:00:00.191) 0:00:28.925 ********** 2026-03-09 00:45:09.224571 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:09.224582 | orchestrator | 2026-03-09 00:45:09.224592 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-09 00:45:09.224603 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.197) 0:00:29.123 ********** 2026-03-09 00:45:09.224615 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:09.224625 | orchestrator | 2026-03-09 00:45:09.224643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496648 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.180) 0:00:29.303 ********** 2026-03-09 00:45:20.496713 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.496724 | orchestrator | 2026-03-09 00:45:20.496732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496739 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.211) 0:00:29.514 ********** 2026-03-09 00:45:20.496746 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.496752 | orchestrator | 2026-03-09 00:45:20.496759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496766 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.188) 0:00:29.703 ********** 2026-03-09 00:45:20.496773 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07) 2026-03-09 00:45:20.496781 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07) 2026-03-09 00:45:20.496787 | orchestrator | 2026-03-09 00:45:20.496794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496801 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.382) 0:00:30.085 ********** 2026-03-09 00:45:20.496808 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9) 2026-03-09 00:45:20.496815 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9) 2026-03-09 00:45:20.496822 | orchestrator | 2026-03-09 00:45:20.496828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496834 | orchestrator | Monday 09 March 2026 00:45:10 +0000 (0:00:00.404) 0:00:30.490 ********** 2026-03-09 00:45:20.496841 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3) 2026-03-09 00:45:20.496848 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3) 2026-03-09 00:45:20.496855 | orchestrator | 2026-03-09 00:45:20.496860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496864 | orchestrator | Monday 09 March 2026 00:45:10 +0000 (0:00:00.362) 0:00:30.853 ********** 2026-03-09 00:45:20.496876 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c) 2026-03-09 00:45:20.496880 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c) 2026-03-09 00:45:20.496884 | orchestrator | 2026-03-09 00:45:20.496888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:20.496892 | orchestrator | Monday 09 March 2026 00:45:11 +0000 (0:00:00.639) 0:00:31.493 ********** 2026-03-09 00:45:20.496896 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:45:20.496900 | orchestrator | 2026-03-09 00:45:20.496903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.496907 | orchestrator | Monday 09 March 2026 00:45:12 +0000 (0:00:00.637) 0:00:32.131 ********** 2026-03-09 00:45:20.496921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-09 00:45:20.496926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:45:20.496929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:45:20.496933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:45:20.496937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:45:20.496940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:45:20.496944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:45:20.496948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:45:20.496952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-09 00:45:20.496955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:45:20.496959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:45:20.496963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:45:20.496966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:45:20.496971 | orchestrator | 2026-03-09 00:45:20.496974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.496978 | orchestrator | Monday 09 March 2026 00:45:12 +0000 (0:00:00.706) 0:00:32.838 ********** 2026-03-09 00:45:20.496982 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.496986 | orchestrator | 2026-03-09 
00:45:20.496989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.496993 | orchestrator | Monday 09 March 2026 00:45:12 +0000 (0:00:00.242) 0:00:33.080 ********** 2026-03-09 00:45:20.496997 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497000 | orchestrator | 2026-03-09 00:45:20.497004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497008 | orchestrator | Monday 09 March 2026 00:45:13 +0000 (0:00:00.231) 0:00:33.313 ********** 2026-03-09 00:45:20.497012 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497016 | orchestrator | 2026-03-09 00:45:20.497028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497033 | orchestrator | Monday 09 March 2026 00:45:13 +0000 (0:00:00.261) 0:00:33.574 ********** 2026-03-09 00:45:20.497036 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497040 | orchestrator | 2026-03-09 00:45:20.497044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497048 | orchestrator | Monday 09 March 2026 00:45:13 +0000 (0:00:00.252) 0:00:33.827 ********** 2026-03-09 00:45:20.497051 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497055 | orchestrator | 2026-03-09 00:45:20.497059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497063 | orchestrator | Monday 09 March 2026 00:45:13 +0000 (0:00:00.208) 0:00:34.036 ********** 2026-03-09 00:45:20.497066 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497070 | orchestrator | 2026-03-09 00:45:20.497074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497077 | orchestrator | Monday 09 March 2026 00:45:14 +0000 (0:00:00.263) 
0:00:34.299 ********** 2026-03-09 00:45:20.497081 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497085 | orchestrator | 2026-03-09 00:45:20.497088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497092 | orchestrator | Monday 09 March 2026 00:45:14 +0000 (0:00:00.250) 0:00:34.550 ********** 2026-03-09 00:45:20.497101 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497105 | orchestrator | 2026-03-09 00:45:20.497109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497113 | orchestrator | Monday 09 March 2026 00:45:14 +0000 (0:00:00.213) 0:00:34.763 ********** 2026-03-09 00:45:20.497119 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-09 00:45:20.497126 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-09 00:45:20.497132 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-09 00:45:20.497138 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-09 00:45:20.497145 | orchestrator | 2026-03-09 00:45:20.497151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497157 | orchestrator | Monday 09 March 2026 00:45:15 +0000 (0:00:00.926) 0:00:35.689 ********** 2026-03-09 00:45:20.497163 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497169 | orchestrator | 2026-03-09 00:45:20.497175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497180 | orchestrator | Monday 09 March 2026 00:45:15 +0000 (0:00:00.190) 0:00:35.880 ********** 2026-03-09 00:45:20.497188 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:45:20.497195 | orchestrator | 2026-03-09 00:45:20.497201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:20.497207 | orchestrator | Monday 09 
March 2026 00:45:16 +0000 (0:00:00.737) 0:00:36.617 **********
2026-03-09 00:45:20.497214 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:20.497220 | orchestrator |
2026-03-09 00:45:20.497225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:45:20.497229 | orchestrator | Monday 09 March 2026 00:45:16 +0000 (0:00:00.215) 0:00:36.834 **********
2026-03-09 00:45:20.497233 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:20.497238 | orchestrator |
2026-03-09 00:45:20.497242 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-09 00:45:20.497247 | orchestrator | Monday 09 March 2026 00:45:16 +0000 (0:00:00.203) 0:00:37.038 **********
2026-03-09 00:45:20.497251 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:20.497256 | orchestrator |
2026-03-09 00:45:20.497260 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-09 00:45:20.497265 | orchestrator | Monday 09 March 2026 00:45:17 +0000 (0:00:00.157) 0:00:37.195 **********
2026-03-09 00:45:20.497269 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9c74837a-43e3-5ea9-9fe0-5cec11260b17'}})
2026-03-09 00:45:20.497274 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '590958f1-5006-5da8-896c-bdb08f0ac33f'}})
2026-03-09 00:45:20.497278 | orchestrator |
2026-03-09 00:45:20.497282 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-09 00:45:20.497287 | orchestrator | Monday 09 March 2026 00:45:17 +0000 (0:00:00.193) 0:00:37.388 **********
2026-03-09 00:45:20.497292 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:20.497297 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:20.497301 | orchestrator |
2026-03-09 00:45:20.497306 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-09 00:45:20.497310 | orchestrator | Monday 09 March 2026 00:45:19 +0000 (0:00:01.851) 0:00:39.239 **********
2026-03-09 00:45:20.497315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:20.497320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:20.497328 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:20.497332 | orchestrator |
2026-03-09 00:45:20.497337 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-09 00:45:20.497341 | orchestrator | Monday 09 March 2026 00:45:19 +0000 (0:00:00.151) 0:00:39.390 **********
2026-03-09 00:45:20.497346 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:20.497356 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.682889 | orchestrator |
2026-03-09 00:45:25.682996 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-09 00:45:25.683014 | orchestrator | Monday 09 March 2026 00:45:20 +0000 (0:00:01.268) 0:00:40.659 **********
2026-03-09 00:45:25.683027 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683053 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683065 | orchestrator |
2026-03-09 00:45:25.683077 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-09 00:45:25.683088 | orchestrator | Monday 09 March 2026 00:45:20 +0000 (0:00:00.135) 0:00:40.800 **********
2026-03-09 00:45:25.683099 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683110 | orchestrator |
2026-03-09 00:45:25.683121 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-09 00:45:25.683132 | orchestrator | Monday 09 March 2026 00:45:20 +0000 (0:00:00.136) 0:00:40.936 **********
2026-03-09 00:45:25.683147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683184 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683202 | orchestrator |
2026-03-09 00:45:25.683221 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-09 00:45:25.683239 | orchestrator | Monday 09 March 2026 00:45:20 +0000 (0:00:00.142) 0:00:41.079 **********
2026-03-09 00:45:25.683257 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683276 | orchestrator |
2026-03-09 00:45:25.683292 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-09 00:45:25.683304 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:00.122) 0:00:41.201 **********
2026-03-09 00:45:25.683315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683337 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683348 | orchestrator |
2026-03-09 00:45:25.683359 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-09 00:45:25.683370 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:00.289) 0:00:41.490 **********
2026-03-09 00:45:25.683381 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683391 | orchestrator |
2026-03-09 00:45:25.683402 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-09 00:45:25.683413 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:00.126) 0:00:41.616 **********
2026-03-09 00:45:25.683424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683473 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683485 | orchestrator |
2026-03-09 00:45:25.683549 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-09 00:45:25.683579 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:00.147) 0:00:41.764 **********
2026-03-09 00:45:25.683592 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:25.683613 | orchestrator |
2026-03-09 00:45:25.683633 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-09 00:45:25.683652 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:00.123) 0:00:41.888 **********
2026-03-09 00:45:25.683671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683709 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683727 | orchestrator |
2026-03-09 00:45:25.683745 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-09 00:45:25.683765 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:00.137) 0:00:42.025 **********
2026-03-09 00:45:25.683784 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683822 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683839 | orchestrator |
2026-03-09 00:45:25.683856 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-09 00:45:25.683902 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.141) 0:00:42.167 **********
2026-03-09 00:45:25.683922 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:25.683942 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:25.683961 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.683979 | orchestrator |
2026-03-09 00:45:25.683999 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-09 00:45:25.684018 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.145) 0:00:42.312 **********
2026-03-09 00:45:25.684038 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.684059 | orchestrator |
2026-03-09 00:45:25.684078 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-09 00:45:25.684098 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.120) 0:00:42.433 **********
2026-03-09 00:45:25.684116 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.684137 | orchestrator |
2026-03-09 00:45:25.684158 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-09 00:45:25.684178 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.128) 0:00:42.562 **********
2026-03-09 00:45:25.684198 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.684217 | orchestrator |
2026-03-09 00:45:25.684237 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-09 00:45:25.684255 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.123) 0:00:42.685 **********
2026-03-09 00:45:25.684275 | orchestrator | ok: [testbed-node-4] => {
2026-03-09 00:45:25.684296 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-09 00:45:25.684332 | orchestrator | }
2026-03-09 00:45:25.684351 | orchestrator |
2026-03-09 00:45:25.684369 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-09 00:45:25.684388 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.131) 0:00:42.817 **********
2026-03-09 00:45:25.684405 | orchestrator | ok: [testbed-node-4] => {
2026-03-09 00:45:25.684422 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-09 00:45:25.684441 | orchestrator | }
2026-03-09 00:45:25.684460 | orchestrator |
2026-03-09 00:45:25.684529 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-09 00:45:25.684554 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:00.132) 0:00:42.949 **********
2026-03-09 00:45:25.684574 | orchestrator | ok: [testbed-node-4] => {
2026-03-09 00:45:25.684592 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-09 00:45:25.684612 | orchestrator | }
2026-03-09 00:45:25.684629 | orchestrator |
2026-03-09 00:45:25.684648 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-09 00:45:25.684668 | orchestrator | Monday 09 March 2026 00:45:23 +0000 (0:00:00.271) 0:00:43.220 **********
2026-03-09 00:45:25.684686 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:25.684704 | orchestrator |
2026-03-09 00:45:25.684722 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-09 00:45:25.684733 | orchestrator | Monday 09 March 2026 00:45:23 +0000 (0:00:00.498) 0:00:43.719 **********
2026-03-09 00:45:25.684744 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:25.684755 | orchestrator |
2026-03-09 00:45:25.684766 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-09 00:45:25.684776 | orchestrator | Monday 09 March 2026 00:45:24 +0000 (0:00:00.518) 0:00:44.238 **********
2026-03-09 00:45:25.684787 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:25.684798 | orchestrator |
2026-03-09 00:45:25.684808 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-09 00:45:25.684819 | orchestrator | Monday 09 March 2026 00:45:24 +0000 (0:00:00.500) 0:00:44.738 **********
2026-03-09 00:45:25.684830 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:25.684840 | orchestrator |
2026-03-09 00:45:25.684851 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-09 00:45:25.684863 | orchestrator | Monday 09 March 2026 00:45:24 +0000 (0:00:00.136) 0:00:44.874 **********
2026-03-09 00:45:25.684882 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.684897 | orchestrator |
2026-03-09 00:45:25.684924 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-09 00:45:25.684944 | orchestrator | Monday 09 March 2026 00:45:24 +0000 (0:00:00.097) 0:00:44.972 **********
2026-03-09 00:45:25.684962 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.684980 | orchestrator |
2026-03-09 00:45:25.684998 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-09 00:45:25.685016 | orchestrator | Monday 09 March 2026 00:45:24 +0000 (0:00:00.112) 0:00:45.085 **********
2026-03-09 00:45:25.685035 | orchestrator | ok: [testbed-node-4] => {
2026-03-09 00:45:25.685053 | orchestrator |     "vgs_report": {
2026-03-09 00:45:25.685073 | orchestrator |         "vg": []
2026-03-09 00:45:25.685093 | orchestrator |     }
2026-03-09 00:45:25.685111 | orchestrator | }
2026-03-09 00:45:25.685127 | orchestrator |
2026-03-09 00:45:25.685138 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-09 00:45:25.685149 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.136) 0:00:45.221 **********
2026-03-09 00:45:25.685160 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.685171 | orchestrator |
2026-03-09 00:45:25.685182 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-09 00:45:25.685193 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.131) 0:00:45.353 **********
2026-03-09 00:45:25.685204 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.685214 | orchestrator |
2026-03-09 00:45:25.685225 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-09 00:45:25.685248 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.121) 0:00:45.474 **********
2026-03-09 00:45:25.685259 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.685270 | orchestrator |
2026-03-09 00:45:25.685281 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-09 00:45:25.685292 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.146) 0:00:45.620 **********
2026-03-09 00:45:25.685303 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:25.685314 | orchestrator |
2026-03-09 00:45:25.685340 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-09 00:45:30.503086 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.142) 0:00:45.763 **********
2026-03-09 00:45:30.503178 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503188 | orchestrator |
2026-03-09 00:45:30.503196 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-09 00:45:30.503202 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.365) 0:00:46.128 **********
2026-03-09 00:45:30.503209 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503214 | orchestrator |
2026-03-09 00:45:30.503221 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-09 00:45:30.503227 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.136) 0:00:46.265 **********
2026-03-09 00:45:30.503233 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503238 | orchestrator |
2026-03-09 00:45:30.503244 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-09 00:45:30.503250 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.149) 0:00:46.414 **********
2026-03-09 00:45:30.503257 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503263 | orchestrator |
2026-03-09 00:45:30.503270 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-09 00:45:30.503276 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.145) 0:00:46.559 **********
2026-03-09 00:45:30.503282 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503291 | orchestrator |
2026-03-09 00:45:30.503297 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-09 00:45:30.503304 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.154) 0:00:46.714 **********
2026-03-09 00:45:30.503310 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503315 | orchestrator |
2026-03-09 00:45:30.503322 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-09 00:45:30.503328 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.148) 0:00:46.863 **********
2026-03-09 00:45:30.503334 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503340 | orchestrator |
2026-03-09 00:45:30.503346 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-09 00:45:30.503353 | orchestrator | Monday 09 March 2026 00:45:26 +0000 (0:00:00.140) 0:00:47.003 **********
2026-03-09 00:45:30.503379 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503387 | orchestrator |
2026-03-09 00:45:30.503394 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-09 00:45:30.503403 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:00.147) 0:00:47.150 **********
2026-03-09 00:45:30.503410 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503417 | orchestrator |
2026-03-09 00:45:30.503426 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-09 00:45:30.503435 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:00.136) 0:00:47.286 **********
2026-03-09 00:45:30.503441 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503448 | orchestrator |
2026-03-09 00:45:30.503458 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-09 00:45:30.503465 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:00.148) 0:00:47.435 **********
2026-03-09 00:45:30.503473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503561 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503567 | orchestrator |
2026-03-09 00:45:30.503573 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-09 00:45:30.503579 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:00.155) 0:00:47.591 **********
2026-03-09 00:45:30.503585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503598 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503604 | orchestrator |
2026-03-09 00:45:30.503611 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-09 00:45:30.503617 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:00.145) 0:00:47.736 **********
2026-03-09 00:45:30.503623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503637 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503644 | orchestrator |
2026-03-09 00:45:30.503651 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-09 00:45:30.503664 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:00.159) 0:00:47.896 **********
2026-03-09 00:45:30.503671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503678 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503684 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503690 | orchestrator |
2026-03-09 00:45:30.503714 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-09 00:45:30.503722 | orchestrator | Monday 09 March 2026 00:45:28 +0000 (0:00:00.371) 0:00:48.267 **********
2026-03-09 00:45:30.503729 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503743 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503749 | orchestrator |
2026-03-09 00:45:30.503755 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-09 00:45:30.503761 | orchestrator | Monday 09 March 2026 00:45:28 +0000 (0:00:00.155) 0:00:48.422 **********
2026-03-09 00:45:30.503768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503780 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503786 | orchestrator |
2026-03-09 00:45:30.503792 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-09 00:45:30.503798 | orchestrator | Monday 09 March 2026 00:45:28 +0000 (0:00:00.177) 0:00:48.600 **********
2026-03-09 00:45:30.503805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503827 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503833 | orchestrator |
2026-03-09 00:45:30.503840 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-09 00:45:30.503846 | orchestrator | Monday 09 March 2026 00:45:28 +0000 (0:00:00.166) 0:00:48.766 **********
2026-03-09 00:45:30.503853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503860 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503867 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.503873 | orchestrator |
2026-03-09 00:45:30.503880 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-09 00:45:30.503886 | orchestrator | Monday 09 March 2026 00:45:28 +0000 (0:00:00.185) 0:00:48.952 **********
2026-03-09 00:45:30.503894 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:30.503901 | orchestrator |
2026-03-09 00:45:30.503908 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-09 00:45:30.503914 | orchestrator | Monday 09 March 2026 00:45:29 +0000 (0:00:00.549) 0:00:49.501 **********
2026-03-09 00:45:30.503921 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:30.503926 | orchestrator |
2026-03-09 00:45:30.503932 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-09 00:45:30.503938 | orchestrator | Monday 09 March 2026 00:45:29 +0000 (0:00:00.495) 0:00:49.997 **********
2026-03-09 00:45:30.503944 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:45:30.503950 | orchestrator |
2026-03-09 00:45:30.503956 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-09 00:45:30.503962 | orchestrator | Monday 09 March 2026 00:45:30 +0000 (0:00:00.154) 0:00:50.151 **********
2026-03-09 00:45:30.503969 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'vg_name': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.503976 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'vg_name': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.503982 | orchestrator |
2026-03-09 00:45:30.503988 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-09 00:45:30.503994 | orchestrator | Monday 09 March 2026 00:45:30 +0000 (0:00:00.184) 0:00:50.336 **********
2026-03-09 00:45:30.504000 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.504007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:30.504013 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:30.504019 | orchestrator |
2026-03-09 00:45:30.504026 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-09 00:45:30.504032 | orchestrator | Monday 09 March 2026 00:45:30 +0000 (0:00:00.157) 0:00:50.494 **********
2026-03-09 00:45:30.504039 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:30.504056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:36.970192 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:36.970304 | orchestrator |
2026-03-09 00:45:36.970329 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-09 00:45:36.970350 | orchestrator | Monday 09 March 2026 00:45:30 +0000 (0:00:00.179) 0:00:50.674 **********
2026-03-09 00:45:36.970369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'})
2026-03-09 00:45:36.970389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'})
2026-03-09 00:45:36.970406 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:45:36.970424 | orchestrator |
2026-03-09 00:45:36.970441 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-09 00:45:36.970460 | orchestrator | Monday 09 March 2026 00:45:30 +0000 (0:00:00.154) 0:00:50.828 **********
2026-03-09 00:45:36.970479 | orchestrator | ok: [testbed-node-4] => {
2026-03-09 00:45:36.970540 | orchestrator |     "lvm_report": {
2026-03-09 00:45:36.970561 | orchestrator |         "lv": [
2026-03-09 00:45:36.970580 | orchestrator |             {
2026-03-09 00:45:36.970601 | orchestrator |                 "lv_name": "osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f",
2026-03-09 00:45:36.970619 | orchestrator |                 "vg_name": "ceph-590958f1-5006-5da8-896c-bdb08f0ac33f"
2026-03-09 00:45:36.970634 | orchestrator |             },
2026-03-09 00:45:36.970652 | orchestrator |             {
2026-03-09 00:45:36.970672 | orchestrator |                 "lv_name": "osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17",
2026-03-09 00:45:36.970692 | orchestrator |                 "vg_name": "ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17"
2026-03-09 00:45:36.970710 | orchestrator |             }
2026-03-09 00:45:36.970729 | orchestrator |         ],
2026-03-09 00:45:36.970749 | orchestrator |         "pv": [
2026-03-09 00:45:36.970769 | orchestrator |             {
2026-03-09 00:45:36.970787 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-09 00:45:36.970815 | orchestrator |                 "vg_name": "ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17"
2026-03-09 00:45:36.970834 | orchestrator |             },
2026-03-09 00:45:36.970853 | orchestrator |             {
2026-03-09 00:45:36.970871 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-09 00:45:36.970890 | orchestrator |                 "vg_name": "ceph-590958f1-5006-5da8-896c-bdb08f0ac33f"
2026-03-09 00:45:36.970908 | orchestrator |             }
2026-03-09 00:45:36.970927 | orchestrator |         ]
2026-03-09 00:45:36.970946 | orchestrator |     }
2026-03-09 00:45:36.970965 | orchestrator | }
2026-03-09 00:45:36.970985 | orchestrator |
2026-03-09 00:45:36.971004 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-09 00:45:36.971024 | orchestrator |
2026-03-09 00:45:36.971036 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:45:36.971047 | orchestrator | Monday 09 March 2026 00:45:31 +0000 (0:00:00.527) 0:00:51.355 **********
2026-03-09 00:45:36.971058 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-09 00:45:36.971069 | orchestrator |
2026-03-09 00:45:36.971080 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:45:36.971090 | orchestrator | Monday 09 March 2026 00:45:31 +0000 (0:00:00.264) 0:00:51.619 **********
2026-03-09 00:45:36.971101 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:45:36.971112 | orchestrator |
2026-03-09 00:45:36.971123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971134 | orchestrator | Monday 09 March 2026 00:45:31 +0000 (0:00:00.270) 0:00:51.890 **********
2026-03-09 00:45:36.971145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:45:36.971155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:45:36.971166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:45:36.971177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:45:36.971198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:45:36.971209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:45:36.971220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:45:36.971230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:45:36.971241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-09 00:45:36.971256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:45:36.971267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:45:36.971278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:45:36.971288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:45:36.971299 | orchestrator |
2026-03-09 00:45:36.971310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971320 | orchestrator | Monday 09 March 2026 00:45:32 +0000 (0:00:00.475) 0:00:52.365 **********
2026-03-09 00:45:36.971331 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971342 | orchestrator |
2026-03-09 00:45:36.971353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971363 | orchestrator | Monday 09 March 2026 00:45:32 +0000 (0:00:00.205) 0:00:52.570 **********
2026-03-09 00:45:36.971374 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971385 | orchestrator |
2026-03-09 00:45:36.971396 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971429 | orchestrator | Monday 09 March 2026 00:45:32 +0000 (0:00:00.230) 0:00:52.801 **********
2026-03-09 00:45:36.971449 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971466 | orchestrator |
2026-03-09 00:45:36.971484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971547 | orchestrator | Monday 09 March 2026 00:45:32 +0000 (0:00:00.243) 0:00:53.044 **********
2026-03-09 00:45:36.971567 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971586 | orchestrator |
2026-03-09 00:45:36.971601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971612 | orchestrator | Monday 09 March 2026 00:45:33 +0000 (0:00:00.232) 0:00:53.277 **********
2026-03-09 00:45:36.971622 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971633 | orchestrator |
2026-03-09 00:45:36.971644 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971654 | orchestrator | Monday 09 March 2026 00:45:33 +0000 (0:00:00.224) 0:00:53.501 **********
2026-03-09 00:45:36.971665 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971675 | orchestrator |
2026-03-09 00:45:36.971686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971697 | orchestrator | Monday 09 March 2026 00:45:34 +0000 (0:00:00.664) 0:00:54.166 **********
2026-03-09 00:45:36.971708 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971718 | orchestrator |
2026-03-09 00:45:36.971729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971740 | orchestrator | Monday 09 March 2026 00:45:34 +0000 (0:00:00.207) 0:00:54.374 **********
2026-03-09 00:45:36.971750 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:45:36.971761 | orchestrator |
2026-03-09 00:45:36.971771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971782 | orchestrator | Monday 09 March 2026 00:45:34 +0000 (0:00:00.207) 0:00:54.581 **********
2026-03-09 00:45:36.971793 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847)
2026-03-09 00:45:36.971811 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847)
2026-03-09 00:45:36.971829 | orchestrator |
2026-03-09 00:45:36.971840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971850 | orchestrator | Monday 09 March 2026 00:45:34 +0000 (0:00:00.431) 0:00:55.012 **********
2026-03-09 00:45:36.971861 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba)
2026-03-09 00:45:36.971872 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba)
2026-03-09 00:45:36.971882 | orchestrator |
2026-03-09 00:45:36.971893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971904 | orchestrator | Monday 09 March 2026 00:45:35 +0000 (0:00:00.454) 0:00:55.467 **********
2026-03-09 00:45:36.971914 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec)
2026-03-09 00:45:36.971925 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec)
2026-03-09 00:45:36.971936 | orchestrator |
2026-03-09 00:45:36.971947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:45:36.971957 | orchestrator | Monday 09
March 2026 00:45:35 +0000 (0:00:00.465) 0:00:55.933 ********** 2026-03-09 00:45:36.971968 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560) 2026-03-09 00:45:36.971979 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560) 2026-03-09 00:45:36.971989 | orchestrator | 2026-03-09 00:45:36.972000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:45:36.972011 | orchestrator | Monday 09 March 2026 00:45:36 +0000 (0:00:00.467) 0:00:56.400 ********** 2026-03-09 00:45:36.972021 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:45:36.972032 | orchestrator | 2026-03-09 00:45:36.972043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:36.972053 | orchestrator | Monday 09 March 2026 00:45:36 +0000 (0:00:00.318) 0:00:56.719 ********** 2026-03-09 00:45:36.972064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-09 00:45:36.972074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-09 00:45:36.972085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-09 00:45:36.972096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-09 00:45:36.972106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-09 00:45:36.972117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-09 00:45:36.972127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-09 00:45:36.972138 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-09 00:45:36.972148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-09 00:45:36.972159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-09 00:45:36.972170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-09 00:45:36.972190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-09 00:45:45.723056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-09 00:45:45.723139 | orchestrator | 2026-03-09 00:45:45.723146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723151 | orchestrator | Monday 09 March 2026 00:45:37 +0000 (0:00:00.429) 0:00:57.148 ********** 2026-03-09 00:45:45.723180 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723186 | orchestrator | 2026-03-09 00:45:45.723190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723194 | orchestrator | Monday 09 March 2026 00:45:37 +0000 (0:00:00.196) 0:00:57.345 ********** 2026-03-09 00:45:45.723197 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723207 | orchestrator | 2026-03-09 00:45:45.723211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723215 | orchestrator | Monday 09 March 2026 00:45:37 +0000 (0:00:00.476) 0:00:57.822 ********** 2026-03-09 00:45:45.723219 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723223 | orchestrator | 2026-03-09 00:45:45.723227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723231 | 
orchestrator | Monday 09 March 2026 00:45:37 +0000 (0:00:00.193) 0:00:58.015 ********** 2026-03-09 00:45:45.723235 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723238 | orchestrator | 2026-03-09 00:45:45.723242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723246 | orchestrator | Monday 09 March 2026 00:45:38 +0000 (0:00:00.227) 0:00:58.243 ********** 2026-03-09 00:45:45.723250 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723253 | orchestrator | 2026-03-09 00:45:45.723257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723261 | orchestrator | Monday 09 March 2026 00:45:38 +0000 (0:00:00.187) 0:00:58.430 ********** 2026-03-09 00:45:45.723264 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723268 | orchestrator | 2026-03-09 00:45:45.723282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723286 | orchestrator | Monday 09 March 2026 00:45:38 +0000 (0:00:00.187) 0:00:58.617 ********** 2026-03-09 00:45:45.723290 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723293 | orchestrator | 2026-03-09 00:45:45.723297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723301 | orchestrator | Monday 09 March 2026 00:45:38 +0000 (0:00:00.183) 0:00:58.801 ********** 2026-03-09 00:45:45.723305 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723308 | orchestrator | 2026-03-09 00:45:45.723312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723316 | orchestrator | Monday 09 March 2026 00:45:38 +0000 (0:00:00.191) 0:00:58.992 ********** 2026-03-09 00:45:45.723320 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-09 00:45:45.723324 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-09 00:45:45.723329 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-09 00:45:45.723332 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-09 00:45:45.723336 | orchestrator | 2026-03-09 00:45:45.723340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723344 | orchestrator | Monday 09 March 2026 00:45:39 +0000 (0:00:00.607) 0:00:59.599 ********** 2026-03-09 00:45:45.723348 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723352 | orchestrator | 2026-03-09 00:45:45.723356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723359 | orchestrator | Monday 09 March 2026 00:45:39 +0000 (0:00:00.189) 0:00:59.788 ********** 2026-03-09 00:45:45.723363 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723367 | orchestrator | 2026-03-09 00:45:45.723370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723374 | orchestrator | Monday 09 March 2026 00:45:39 +0000 (0:00:00.179) 0:00:59.968 ********** 2026-03-09 00:45:45.723378 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723382 | orchestrator | 2026-03-09 00:45:45.723385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:45:45.723389 | orchestrator | Monday 09 March 2026 00:45:40 +0000 (0:00:00.200) 0:01:00.169 ********** 2026-03-09 00:45:45.723397 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723400 | orchestrator | 2026-03-09 00:45:45.723404 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-09 00:45:45.723408 | orchestrator | Monday 09 March 2026 00:45:40 +0000 (0:00:00.209) 0:01:00.378 ********** 2026-03-09 00:45:45.723412 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
00:45:45.723415 | orchestrator | 2026-03-09 00:45:45.723419 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-09 00:45:45.723425 | orchestrator | Monday 09 March 2026 00:45:40 +0000 (0:00:00.323) 0:01:00.701 ********** 2026-03-09 00:45:45.723431 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e95d8336-562c-5e60-938c-e1db43f5f553'}}) 2026-03-09 00:45:45.723437 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c56389c1-f3b1-5ba6-b160-f425a16b3e47'}}) 2026-03-09 00:45:45.723444 | orchestrator | 2026-03-09 00:45:45.723450 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-09 00:45:45.723457 | orchestrator | Monday 09 March 2026 00:45:40 +0000 (0:00:00.206) 0:01:00.908 ********** 2026-03-09 00:45:45.723464 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'}) 2026-03-09 00:45:45.723471 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'}) 2026-03-09 00:45:45.723478 | orchestrator | 2026-03-09 00:45:45.723484 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-09 00:45:45.723532 | orchestrator | Monday 09 March 2026 00:45:42 +0000 (0:00:01.889) 0:01:02.797 ********** 2026-03-09 00:45:45.723537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:45.723543 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:45.723547 | orchestrator | skipping: 
[testbed-node-5] 2026-03-09 00:45:45.723550 | orchestrator | 2026-03-09 00:45:45.723554 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-09 00:45:45.723558 | orchestrator | Monday 09 March 2026 00:45:42 +0000 (0:00:00.151) 0:01:02.949 ********** 2026-03-09 00:45:45.723562 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'}) 2026-03-09 00:45:45.723566 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'}) 2026-03-09 00:45:45.723569 | orchestrator | 2026-03-09 00:45:45.723573 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-09 00:45:45.723577 | orchestrator | Monday 09 March 2026 00:45:44 +0000 (0:00:01.303) 0:01:04.252 ********** 2026-03-09 00:45:45.723581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:45.723584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:45.723588 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723592 | orchestrator | 2026-03-09 00:45:45.723596 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-09 00:45:45.723600 | orchestrator | Monday 09 March 2026 00:45:44 +0000 (0:00:00.156) 0:01:04.409 ********** 2026-03-09 00:45:45.723604 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723608 | orchestrator | 2026-03-09 00:45:45.723611 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-09 00:45:45.723615 | 
orchestrator | Monday 09 March 2026 00:45:44 +0000 (0:00:00.142) 0:01:04.552 ********** 2026-03-09 00:45:45.723623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:45.723628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:45.723633 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723637 | orchestrator | 2026-03-09 00:45:45.723641 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-09 00:45:45.723646 | orchestrator | Monday 09 March 2026 00:45:44 +0000 (0:00:00.187) 0:01:04.739 ********** 2026-03-09 00:45:45.723650 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723654 | orchestrator | 2026-03-09 00:45:45.723659 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-09 00:45:45.723669 | orchestrator | Monday 09 March 2026 00:45:44 +0000 (0:00:00.141) 0:01:04.881 ********** 2026-03-09 00:45:45.723673 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:45.723677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:45.723682 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723686 | orchestrator | 2026-03-09 00:45:45.723691 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-09 00:45:45.723700 | orchestrator | Monday 09 March 2026 00:45:44 +0000 (0:00:00.173) 0:01:05.055 ********** 2026-03-09 00:45:45.723705 | orchestrator | 
skipping: [testbed-node-5] 2026-03-09 00:45:45.723709 | orchestrator | 2026-03-09 00:45:45.723714 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-09 00:45:45.723718 | orchestrator | Monday 09 March 2026 00:45:45 +0000 (0:00:00.132) 0:01:05.187 ********** 2026-03-09 00:45:45.723723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:45.723728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:45.723734 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:45.723740 | orchestrator | 2026-03-09 00:45:45.723746 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-09 00:45:45.723753 | orchestrator | Monday 09 March 2026 00:45:45 +0000 (0:00:00.161) 0:01:05.349 ********** 2026-03-09 00:45:45.723759 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:45.723766 | orchestrator | 2026-03-09 00:45:45.723772 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-09 00:45:45.723778 | orchestrator | Monday 09 March 2026 00:45:45 +0000 (0:00:00.382) 0:01:05.731 ********** 2026-03-09 00:45:45.723786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:52.047923 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:52.048032 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048051 | orchestrator | 2026-03-09 00:45:52.048066 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-09 00:45:52.048082 | orchestrator | Monday 09 March 2026 00:45:45 +0000 (0:00:00.164) 0:01:05.895 ********** 2026-03-09 00:45:52.048096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:52.048109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:52.048147 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048161 | orchestrator | 2026-03-09 00:45:52.048174 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-09 00:45:52.048187 | orchestrator | Monday 09 March 2026 00:45:45 +0000 (0:00:00.151) 0:01:06.047 ********** 2026-03-09 00:45:52.048201 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:52.048214 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:52.048227 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048240 | orchestrator | 2026-03-09 00:45:52.048253 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-09 00:45:52.048280 | orchestrator | Monday 09 March 2026 00:45:46 +0000 (0:00:00.169) 0:01:06.216 ********** 2026-03-09 00:45:52.048294 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048307 | orchestrator | 2026-03-09 00:45:52.048320 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-09 00:45:52.048333 | orchestrator | Monday 09 March 2026 00:45:46 +0000 
(0:00:00.137) 0:01:06.354 ********** 2026-03-09 00:45:52.048346 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048359 | orchestrator | 2026-03-09 00:45:52.048371 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-09 00:45:52.048383 | orchestrator | Monday 09 March 2026 00:45:46 +0000 (0:00:00.135) 0:01:06.489 ********** 2026-03-09 00:45:52.048396 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048408 | orchestrator | 2026-03-09 00:45:52.048421 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-09 00:45:52.048434 | orchestrator | Monday 09 March 2026 00:45:46 +0000 (0:00:00.145) 0:01:06.634 ********** 2026-03-09 00:45:52.048447 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:45:52.048461 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-09 00:45:52.048474 | orchestrator | } 2026-03-09 00:45:52.048511 | orchestrator | 2026-03-09 00:45:52.048527 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-09 00:45:52.048542 | orchestrator | Monday 09 March 2026 00:45:46 +0000 (0:00:00.146) 0:01:06.781 ********** 2026-03-09 00:45:52.048556 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:45:52.048570 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-09 00:45:52.048584 | orchestrator | } 2026-03-09 00:45:52.048598 | orchestrator | 2026-03-09 00:45:52.048612 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-09 00:45:52.048626 | orchestrator | Monday 09 March 2026 00:45:46 +0000 (0:00:00.160) 0:01:06.941 ********** 2026-03-09 00:45:52.048640 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:45:52.048654 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-09 00:45:52.048668 | orchestrator | } 2026-03-09 00:45:52.048681 | orchestrator | 2026-03-09 00:45:52.048695 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-09 00:45:52.048709 | orchestrator | Monday 09 March 2026 00:45:46 +0000 (0:00:00.137) 0:01:07.079 ********** 2026-03-09 00:45:52.048723 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:52.048737 | orchestrator | 2026-03-09 00:45:52.048751 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-09 00:45:52.048765 | orchestrator | Monday 09 March 2026 00:45:47 +0000 (0:00:00.507) 0:01:07.586 ********** 2026-03-09 00:45:52.048779 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:52.048793 | orchestrator | 2026-03-09 00:45:52.048807 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-09 00:45:52.048820 | orchestrator | Monday 09 March 2026 00:45:48 +0000 (0:00:00.560) 0:01:08.147 ********** 2026-03-09 00:45:52.048833 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:52.048856 | orchestrator | 2026-03-09 00:45:52.048869 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-09 00:45:52.048882 | orchestrator | Monday 09 March 2026 00:45:48 +0000 (0:00:00.774) 0:01:08.921 ********** 2026-03-09 00:45:52.048895 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:52.048907 | orchestrator | 2026-03-09 00:45:52.048920 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-09 00:45:52.048932 | orchestrator | Monday 09 March 2026 00:45:48 +0000 (0:00:00.150) 0:01:09.072 ********** 2026-03-09 00:45:52.048944 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.048958 | orchestrator | 2026-03-09 00:45:52.048972 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-09 00:45:52.048984 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.124) 0:01:09.197 ********** 2026-03-09 00:45:52.048997 | 
orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049010 | orchestrator | 2026-03-09 00:45:52.049023 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-09 00:45:52.049037 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.116) 0:01:09.313 ********** 2026-03-09 00:45:52.049051 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:45:52.049066 | orchestrator |  "vgs_report": { 2026-03-09 00:45:52.049080 | orchestrator |  "vg": [] 2026-03-09 00:45:52.049114 | orchestrator |  } 2026-03-09 00:45:52.049129 | orchestrator | } 2026-03-09 00:45:52.049143 | orchestrator | 2026-03-09 00:45:52.049157 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-09 00:45:52.049171 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.153) 0:01:09.466 ********** 2026-03-09 00:45:52.049185 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049198 | orchestrator | 2026-03-09 00:45:52.049211 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-09 00:45:52.049225 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.134) 0:01:09.601 ********** 2026-03-09 00:45:52.049239 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049253 | orchestrator | 2026-03-09 00:45:52.049266 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-09 00:45:52.049280 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.141) 0:01:09.743 ********** 2026-03-09 00:45:52.049293 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049306 | orchestrator | 2026-03-09 00:45:52.049320 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-09 00:45:52.049334 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.139) 0:01:09.882 ********** 2026-03-09 00:45:52.049348 | 
orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049361 | orchestrator | 2026-03-09 00:45:52.049375 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-09 00:45:52.049389 | orchestrator | Monday 09 March 2026 00:45:49 +0000 (0:00:00.157) 0:01:10.039 ********** 2026-03-09 00:45:52.049401 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049415 | orchestrator | 2026-03-09 00:45:52.049429 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-09 00:45:52.049443 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.146) 0:01:10.186 ********** 2026-03-09 00:45:52.049456 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049470 | orchestrator | 2026-03-09 00:45:52.049484 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-09 00:45:52.049522 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.147) 0:01:10.333 ********** 2026-03-09 00:45:52.049537 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049550 | orchestrator | 2026-03-09 00:45:52.049563 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-09 00:45:52.049577 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.140) 0:01:10.473 ********** 2026-03-09 00:45:52.049591 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049605 | orchestrator | 2026-03-09 00:45:52.049619 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-09 00:45:52.049642 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.369) 0:01:10.842 ********** 2026-03-09 00:45:52.049655 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049669 | orchestrator | 2026-03-09 00:45:52.049682 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-09 00:45:52.049694 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.159) 0:01:11.002 ********** 2026-03-09 00:45:52.049705 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049717 | orchestrator | 2026-03-09 00:45:52.049728 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-09 00:45:52.049740 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.158) 0:01:11.161 ********** 2026-03-09 00:45:52.049751 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049762 | orchestrator | 2026-03-09 00:45:52.049774 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-09 00:45:52.049786 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.143) 0:01:11.304 ********** 2026-03-09 00:45:52.049798 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049809 | orchestrator | 2026-03-09 00:45:52.049821 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-09 00:45:52.049833 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.134) 0:01:11.439 ********** 2026-03-09 00:45:52.049844 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049856 | orchestrator | 2026-03-09 00:45:52.049867 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-09 00:45:52.049879 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.159) 0:01:11.599 ********** 2026-03-09 00:45:52.049890 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049901 | orchestrator | 2026-03-09 00:45:52.049913 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-09 00:45:52.049924 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.146) 0:01:11.746 ********** 2026-03-09 00:45:52.049936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:52.049948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:52.049960 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.049972 | orchestrator | 2026-03-09 00:45:52.049983 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-09 00:45:52.049994 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.148) 0:01:11.894 ********** 2026-03-09 00:45:52.050006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:52.050082 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:52.050094 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:52.050106 | orchestrator | 2026-03-09 00:45:52.050118 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-09 00:45:52.050130 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.155) 0:01:12.050 ********** 2026-03-09 00:45:52.050151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.186536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.186649 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.186675 | orchestrator | 2026-03-09 00:45:55.186696 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-09 00:45:55.186716 | orchestrator | Monday 09 March 2026 00:45:52 +0000 (0:00:00.178) 0:01:12.228 ********** 2026-03-09 00:45:55.186778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.186798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.186815 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.186833 | orchestrator | 2026-03-09 00:45:55.186850 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-09 00:45:55.186868 | orchestrator | Monday 09 March 2026 00:45:52 +0000 (0:00:00.158) 0:01:12.387 ********** 2026-03-09 00:45:55.186885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.186923 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.186942 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.186960 | orchestrator | 2026-03-09 00:45:55.186977 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-09 00:45:55.186995 | orchestrator | Monday 09 March 2026 00:45:52 +0000 (0:00:00.162) 0:01:12.550 ********** 2026-03-09 00:45:55.187013 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.187031 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.187051 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.187069 | orchestrator | 2026-03-09 00:45:55.187088 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-09 00:45:55.187107 | orchestrator | Monday 09 March 2026 00:45:52 +0000 (0:00:00.390) 0:01:12.940 ********** 2026-03-09 00:45:55.187127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.187147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.187166 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.187184 | orchestrator | 2026-03-09 00:45:55.187203 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-09 00:45:55.187223 | orchestrator | Monday 09 March 2026 00:45:53 +0000 (0:00:00.158) 0:01:13.098 ********** 2026-03-09 00:45:55.187242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.187260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.187280 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.187299 | orchestrator | 2026-03-09 00:45:55.187313 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-09 00:45:55.187325 | orchestrator | Monday 09 March 2026 00:45:53 +0000 (0:00:00.158) 0:01:13.257 ********** 2026-03-09 00:45:55.187345 | 
orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:55.187358 | orchestrator | 2026-03-09 00:45:55.187371 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-09 00:45:55.187384 | orchestrator | Monday 09 March 2026 00:45:53 +0000 (0:00:00.484) 0:01:13.741 ********** 2026-03-09 00:45:55.187397 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:55.187407 | orchestrator | 2026-03-09 00:45:55.187418 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-09 00:45:55.187447 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.489) 0:01:14.232 ********** 2026-03-09 00:45:55.187466 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:45:55.187486 | orchestrator | 2026-03-09 00:45:55.187555 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-09 00:45:55.187567 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.151) 0:01:14.384 ********** 2026-03-09 00:45:55.187578 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'vg_name': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'}) 2026-03-09 00:45:55.187590 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'vg_name': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'}) 2026-03-09 00:45:55.187601 | orchestrator | 2026-03-09 00:45:55.187612 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-09 00:45:55.187623 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.177) 0:01:14.561 ********** 2026-03-09 00:45:55.187658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.187670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.187681 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.187692 | orchestrator | 2026-03-09 00:45:55.187703 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-09 00:45:55.187713 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.160) 0:01:14.722 ********** 2026-03-09 00:45:55.187724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.187735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.187746 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.187756 | orchestrator | 2026-03-09 00:45:55.187772 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-09 00:45:55.187787 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.171) 0:01:14.893 ********** 2026-03-09 00:45:55.187798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'})  2026-03-09 00:45:55.187809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'})  2026-03-09 00:45:55.187820 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:45:55.187839 | orchestrator | 2026-03-09 00:45:55.187853 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-09 00:45:55.187866 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.152) 0:01:15.045 ********** 2026-03-09 00:45:55.187883 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:45:55.187901 | orchestrator |  "lvm_report": { 2026-03-09 00:45:55.187920 | orchestrator |  "lv": [ 2026-03-09 00:45:55.187938 | orchestrator |  { 2026-03-09 00:45:55.187956 | orchestrator |  "lv_name": "osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47", 2026-03-09 00:45:55.187974 | orchestrator |  "vg_name": "ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47" 2026-03-09 00:45:55.187992 | orchestrator |  }, 2026-03-09 00:45:55.188011 | orchestrator |  { 2026-03-09 00:45:55.188028 | orchestrator |  "lv_name": "osd-block-e95d8336-562c-5e60-938c-e1db43f5f553", 2026-03-09 00:45:55.188045 | orchestrator |  "vg_name": "ceph-e95d8336-562c-5e60-938c-e1db43f5f553" 2026-03-09 00:45:55.188063 | orchestrator |  } 2026-03-09 00:45:55.188081 | orchestrator |  ], 2026-03-09 00:45:55.188100 | orchestrator |  "pv": [ 2026-03-09 00:45:55.188132 | orchestrator |  { 2026-03-09 00:45:55.188150 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:45:55.188170 | orchestrator |  "vg_name": "ceph-e95d8336-562c-5e60-938c-e1db43f5f553" 2026-03-09 00:45:55.188188 | orchestrator |  }, 2026-03-09 00:45:55.188206 | orchestrator |  { 2026-03-09 00:45:55.188226 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:45:55.188244 | orchestrator |  "vg_name": "ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47" 2026-03-09 00:45:55.188261 | orchestrator |  } 2026-03-09 00:45:55.188280 | orchestrator |  ] 2026-03-09 00:45:55.188294 | orchestrator |  } 2026-03-09 00:45:55.188305 | orchestrator | } 2026-03-09 00:45:55.188317 | orchestrator | 2026-03-09 00:45:55.188327 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:45:55.188338 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:45:55.188349 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:45:55.188360 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:45:55.188371 | orchestrator | 2026-03-09 00:45:55.188381 | orchestrator | 2026-03-09 00:45:55.188396 | orchestrator | 2026-03-09 00:45:55.188412 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:45:55.188423 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:00.209) 0:01:15.255 ********** 2026-03-09 00:45:55.188434 | orchestrator | =============================================================================== 2026-03-09 00:45:55.188448 | orchestrator | Create block VGs -------------------------------------------------------- 5.89s 2026-03-09 00:45:55.188464 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s 2026-03-09 00:45:55.188475 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.80s 2026-03-09 00:45:55.188512 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2026-03-09 00:45:55.188538 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2026-03-09 00:45:55.188550 | orchestrator | Add known partitions to the list of available block devices ------------- 1.59s 2026-03-09 00:45:55.188561 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2026-03-09 00:45:55.188572 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s 2026-03-09 00:45:55.188595 | orchestrator | Add known links to the list of available block devices ------------------ 1.49s 2026-03-09 00:45:55.684429 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s 2026-03-09 00:45:55.684592 | orchestrator | Print LVM report data --------------------------------------------------- 1.04s 2026-03-09 00:45:55.684611 | 
orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2026-03-09 00:45:55.684623 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-03-09 00:45:55.684634 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2026-03-09 00:45:55.684645 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-03-09 00:45:55.684656 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-03-09 00:45:55.684667 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-09 00:45:55.684678 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.73s 2026-03-09 00:45:55.684689 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-09 00:45:55.684700 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.71s 2026-03-09 00:46:08.245092 | orchestrator | 2026-03-09 00:46:08 | INFO  | Prepare task for execution of facts. 2026-03-09 00:46:08.325140 | orchestrator | 2026-03-09 00:46:08 | INFO  | Task 12714a7b-bfc7-4b7c-b72f-5ed73a4eec1f (facts) was prepared for execution. 2026-03-09 00:46:08.325250 | orchestrator | 2026-03-09 00:46:08 | INFO  | It takes a moment until task 12714a7b-bfc7-4b7c-b72f-5ed73a4eec1f (facts) has been started and output is visible here. 
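The play above verifies the Ceph OSD layout by running `lvs` and `pvs` with JSON report output and combining the two into a single `lvm_report` (the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task, printed later as "Print LVM report data"). A minimal sketch of that combine-and-match step, using hypothetical inline JSON in place of real `lvs --reportformat json` / `pvs --reportformat json` output:

```python
import json

# Hypothetical output shaped like `lvs --reportformat json -o lv_name,vg_name`
lvs_output = json.loads("""{"report": [{"lv": [
    {"lv_name": "osd-block-aaa", "vg_name": "ceph-aaa"},
    {"lv_name": "osd-block-bbb", "vg_name": "ceph-bbb"}]}]}""")

# Hypothetical output shaped like `pvs --reportformat json -o pv_name,vg_name`
pvs_output = json.loads("""{"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-aaa"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-bbb"}]}]}""")

# Combine both reports into one structure, as the play's lvm_report does
lvm_report = {
    "lv": lvs_output["report"][0]["lv"],
    "pv": pvs_output["report"][0]["pv"],
}

# Each OSD block LV maps to its backing physical device via the shared VG name
pv_by_vg = {pv["vg_name"]: pv["pv_name"] for pv in lvm_report["pv"]}
osd_devices = {lv["lv_name"]: pv_by_vg[lv["vg_name"]] for lv in lvm_report["lv"]}
print(osd_devices)
```

This is only an illustration of the data shape; the real play additionally cross-checks the result against `lvm_volumes` and fails if a block, DB, or WAL LV is missing.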
2026-03-09 00:46:22.108114 | orchestrator | 2026-03-09 00:46:22.108221 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-09 00:46:22.108236 | orchestrator | 2026-03-09 00:46:22.108244 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-09 00:46:22.108252 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:00.367) 0:00:00.367 ********** 2026-03-09 00:46:22.108260 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:46:22.108269 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:46:22.108276 | orchestrator | ok: [testbed-manager] 2026-03-09 00:46:22.108284 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:46:22.108292 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:46:22.108299 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:22.108306 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:46:22.108313 | orchestrator | 2026-03-09 00:46:22.108321 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-09 00:46:22.108327 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:01.197) 0:00:01.564 ********** 2026-03-09 00:46:22.108334 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:46:22.108342 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:46:22.108349 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:46:22.108355 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:46:22.108362 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:22.108369 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:22.108376 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:46:22.108382 | orchestrator | 2026-03-09 00:46:22.108389 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:46:22.108397 | orchestrator | 2026-03-09 00:46:22.108404 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-09 00:46:22.108411 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:01.277) 0:00:02.842 ********** 2026-03-09 00:46:22.108419 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:46:22.108427 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:46:22.108434 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:46:22.108442 | orchestrator | ok: [testbed-manager] 2026-03-09 00:46:22.108449 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:22.108456 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:46:22.108463 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:46:22.108470 | orchestrator | 2026-03-09 00:46:22.108477 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:46:22.108554 | orchestrator | 2026-03-09 00:46:22.108562 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:46:22.108570 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:05.464) 0:00:08.307 ********** 2026-03-09 00:46:22.108577 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:46:22.108584 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:46:22.108591 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:46:22.108597 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:46:22.108604 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:22.108611 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:22.108619 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:46:22.108626 | orchestrator | 2026-03-09 00:46:22.108633 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:46:22.108641 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:46:22.108650 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-09 00:46:22.108682 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:46:22.108691 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:46:22.108699 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:46:22.108706 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:46:22.108714 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:46:22.108721 | orchestrator | 2026-03-09 00:46:22.108728 | orchestrator | 2026-03-09 00:46:22.108735 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:46:22.108743 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:00.608) 0:00:08.915 ********** 2026-03-09 00:46:22.108751 | orchestrator | =============================================================================== 2026-03-09 00:46:22.108759 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.46s 2026-03-09 00:46:22.108767 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-03-09 00:46:22.108774 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s 2026-03-09 00:46:22.108782 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-03-09 00:46:34.791050 | orchestrator | 2026-03-09 00:46:34 | INFO  | Prepare task for execution of frr. 2026-03-09 00:46:34.867769 | orchestrator | 2026-03-09 00:46:34 | INFO  | Task 154078bd-eaef-462a-8654-360c93de538b (frr) was prepared for execution. 
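The `osism.commons.facts` role above creates the custom facts directory and (when enabled) copies `*.fact` files into it; Ansible's setup module then exposes each file's JSON content under `ansible_local.<name>`. A minimal stdlib sketch of that round-trip, using a hypothetical fact file and a temporary directory in place of `/etc/ansible/facts.d`:

```python
import json
import pathlib
import tempfile

# Hypothetical custom fact a role might install as /etc/ansible/facts.d/testbed.fact
fact = {"deployment": "testbed", "role": "manager"}

factsdir = pathlib.Path(tempfile.mkdtemp())
(factsdir / "testbed.fact").write_text(json.dumps(fact))

# The setup module reads each *.fact file and publishes it as ansible_local.<stem>
ansible_local = {p.stem: json.loads(p.read_text()) for p in factsdir.glob("*.fact")}
print(ansible_local["testbed"]["role"])
```

Since the fact files are static JSON, re-running the gather step (as the "Gathers facts about hosts" play does for all seven hosts) is idempotent, which is why the recap shows `changed=0`.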
2026-03-09 00:46:34.867860 | orchestrator | 2026-03-09 00:46:34 | INFO  | It takes a moment until task 154078bd-eaef-462a-8654-360c93de538b (frr) has been started and output is visible here. 2026-03-09 00:47:03.583232 | orchestrator | 2026-03-09 00:47:03.583384 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-09 00:47:03.583407 | orchestrator | 2026-03-09 00:47:03.583423 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-09 00:47:03.583438 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.238) 0:00:00.238 ********** 2026-03-09 00:47:03.583452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:47:03.583468 | orchestrator | 2026-03-09 00:47:03.584551 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-09 00:47:03.584646 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.227) 0:00:00.465 ********** 2026-03-09 00:47:03.584688 | orchestrator | changed: [testbed-manager] 2026-03-09 00:47:03.584722 | orchestrator | 2026-03-09 00:47:03.584740 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-09 00:47:03.584758 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:01.276) 0:00:01.741 ********** 2026-03-09 00:47:03.584775 | orchestrator | changed: [testbed-manager] 2026-03-09 00:47:03.584792 | orchestrator | 2026-03-09 00:47:03.584810 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-09 00:47:03.584826 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:10.511) 0:00:12.252 ********** 2026-03-09 00:47:03.584844 | orchestrator | ok: [testbed-manager] 2026-03-09 00:47:03.584862 | orchestrator | 2026-03-09 00:47:03.584880 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-09 00:47:03.584897 | orchestrator | Monday 09 March 2026 00:46:52 +0000 (0:00:01.214) 0:00:13.467 ********** 2026-03-09 00:47:03.584910 | orchestrator | changed: [testbed-manager] 2026-03-09 00:47:03.584949 | orchestrator | 2026-03-09 00:47:03.584960 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-09 00:47:03.584971 | orchestrator | Monday 09 March 2026 00:46:53 +0000 (0:00:01.047) 0:00:14.515 ********** 2026-03-09 00:47:03.584988 | orchestrator | ok: [testbed-manager] 2026-03-09 00:47:03.585005 | orchestrator | 2026-03-09 00:47:03.585023 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-09 00:47:03.585057 | orchestrator | Monday 09 March 2026 00:46:54 +0000 (0:00:01.291) 0:00:15.807 ********** 2026-03-09 00:47:03.585087 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:03.585098 | orchestrator | 2026-03-09 00:47:03.585108 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-09 00:47:03.585118 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.161) 0:00:15.969 ********** 2026-03-09 00:47:03.585128 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:03.585138 | orchestrator | 2026-03-09 00:47:03.585148 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-09 00:47:03.585157 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.162) 0:00:16.131 ********** 2026-03-09 00:47:03.585167 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:03.585177 | orchestrator | 2026-03-09 00:47:03.585186 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-09 00:47:03.585197 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.170) 0:00:16.302 ********** 2026-03-09 
00:47:03.585207 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:03.585217 | orchestrator | 2026-03-09 00:47:03.585227 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-09 00:47:03.585237 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.149) 0:00:16.451 ********** 2026-03-09 00:47:03.585247 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:03.585257 | orchestrator | 2026-03-09 00:47:03.585271 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-09 00:47:03.585287 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.146) 0:00:16.598 ********** 2026-03-09 00:47:03.585303 | orchestrator | changed: [testbed-manager] 2026-03-09 00:47:03.585320 | orchestrator | 2026-03-09 00:47:03.585337 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-09 00:47:03.585353 | orchestrator | Monday 09 March 2026 00:46:57 +0000 (0:00:01.276) 0:00:17.875 ********** 2026-03-09 00:47:03.585371 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-09 00:47:03.585387 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-09 00:47:03.585404 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-09 00:47:03.585415 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-09 00:47:03.585424 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-09 00:47:03.585436 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-09 00:47:03.585453 | orchestrator | 2026-03-09 00:47:03.585491 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-09 00:47:03.585511 | orchestrator | Monday 09 March 2026 00:47:00 +0000 (0:00:03.423) 0:00:21.299 ********** 2026-03-09 00:47:03.585527 | orchestrator | ok: [testbed-manager] 2026-03-09 00:47:03.585544 | orchestrator | 2026-03-09 00:47:03.585561 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-09 00:47:03.585577 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:01.389) 0:00:22.688 ********** 2026-03-09 00:47:03.585594 | orchestrator | changed: [testbed-manager] 2026-03-09 00:47:03.585611 | orchestrator | 2026-03-09 00:47:03.585628 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:47:03.585657 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:47:03.585674 | orchestrator | 2026-03-09 00:47:03.585691 | orchestrator | 2026-03-09 00:47:03.585752 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:47:03.585769 | orchestrator | Monday 09 March 2026 00:47:03 +0000 (0:00:01.342) 0:00:24.030 ********** 2026-03-09 00:47:03.585785 | orchestrator | =============================================================================== 2026-03-09 00:47:03.585800 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.51s 2026-03-09 00:47:03.585816 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.42s 2026-03-09 00:47:03.585832 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s 2026-03-09 00:47:03.585848 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.34s 2026-03-09 00:47:03.585864 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.29s 
2026-03-09 00:47:03.585881 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.28s 2026-03-09 00:47:03.585898 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.28s 2026-03-09 00:47:03.585915 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.21s 2026-03-09 00:47:03.585932 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.05s 2026-03-09 00:47:03.585950 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-03-09 00:47:03.585966 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.17s 2026-03-09 00:47:03.585983 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.16s 2026-03-09 00:47:03.586001 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-03-09 00:47:03.586081 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-03-09 00:47:03.586105 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-09 00:47:03.945216 | orchestrator | 2026-03-09 00:47:03.946976 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 9 00:47:03 UTC 2026 2026-03-09 00:47:03.947042 | orchestrator | 2026-03-09 00:47:06.024717 | orchestrator | 2026-03-09 00:47:06 | INFO  | Collection nutshell is prepared for execution 2026-03-09 00:47:06.024813 | orchestrator | 2026-03-09 00:47:06 | INFO  | A [0] - dotfiles 2026-03-09 00:47:16.080321 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [0] - homer 2026-03-09 00:47:16.080419 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [0] - netdata 2026-03-09 00:47:16.080430 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [0] - openstackclient 2026-03-09 00:47:16.080438 | orchestrator | 2026-03-09 00:47:16 
| INFO  | A [0] - phpmyadmin 2026-03-09 00:47:16.080445 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [0] - common 2026-03-09 00:47:16.082595 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- loadbalancer 2026-03-09 00:47:16.082749 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [2] --- opensearch 2026-03-09 00:47:16.082762 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [2] --- mariadb-ng 2026-03-09 00:47:16.082768 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [3] ---- horizon 2026-03-09 00:47:16.082782 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [3] ---- keystone 2026-03-09 00:47:16.082835 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- neutron 2026-03-09 00:47:16.082844 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [5] ------ wait-for-nova 2026-03-09 00:47:16.083193 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [6] ------- octavia 2026-03-09 00:47:16.084610 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- barbican 2026-03-09 00:47:16.084683 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- designate 2026-03-09 00:47:16.084694 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- ironic 2026-03-09 00:47:16.084701 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- placement 2026-03-09 00:47:16.084708 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- magnum 2026-03-09 00:47:16.084952 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- openvswitch 2026-03-09 00:47:16.085011 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [2] --- ovn 2026-03-09 00:47:16.085024 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- memcached 2026-03-09 00:47:16.085188 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- redis 2026-03-09 00:47:16.085205 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- rabbitmq-ng 2026-03-09 00:47:16.085584 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [0] - kubernetes 2026-03-09 00:47:16.087776 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- 
kubeconfig
2026-03-09 00:47:16.087817 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- copy-kubeconfig
2026-03-09 00:47:16.087826 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [0] - ceph
2026-03-09 00:47:16.089914 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [1] -- ceph-pools
2026-03-09 00:47:16.089954 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [2] --- copy-ceph-keys
2026-03-09 00:47:16.089963 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [3] ---- cephclient
2026-03-09 00:47:16.089971 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-09 00:47:16.090138 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- wait-for-keystone
2026-03-09 00:47:16.090492 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-09 00:47:16.090963 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [5] ------ glance
2026-03-09 00:47:16.091011 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [5] ------ cinder
2026-03-09 00:47:16.091771 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [5] ------ nova
2026-03-09 00:47:16.091809 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [4] ----- prometheus
2026-03-09 00:47:16.091822 | orchestrator | 2026-03-09 00:47:16 | INFO  | A [5] ------ grafana
2026-03-09 00:47:16.390004 | orchestrator | 2026-03-09 00:47:16 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-09 00:47:16.393902 | orchestrator | 2026-03-09 00:47:16 | INFO  | Tasks are running in the background
2026-03-09 00:47:20.070607 | orchestrator | 2026-03-09 00:47:20 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-09 00:47:22.233262 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:22.240011 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:22.242163 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:22.245163 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:22.247693 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:22.250084 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:22.250818 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:22.251025 | orchestrator | 2026-03-09 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:25.291914 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:25.291992 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:25.292005 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:25.292016 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:25.292592 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:25.294194 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:25.294267 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:25.295587 | orchestrator | 2026-03-09 00:47:25 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:28.344845 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:28.345886 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:28.346002 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:28.346094 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:28.346608 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:28.347504 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:28.349266 | orchestrator | 2026-03-09 00:47:28 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:28.349336 | orchestrator | 2026-03-09 00:47:28 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:31.386972 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:31.388286 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:31.389257 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:31.391419 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:31.394177 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:31.395028 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:31.396069 | orchestrator | 2026-03-09 00:47:31 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:31.396089 | orchestrator | 2026-03-09 00:47:31 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:34.455400 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:34.455486 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:34.455494 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:34.455498 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:34.455514 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:34.455519 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:34.455522 | orchestrator | 2026-03-09 00:47:34 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:34.455527 | orchestrator | 2026-03-09 00:47:34 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:37.492200 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:37.492371 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:37.492877 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:37.493775 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:37.494175 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:37.494806 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:37.495408 | orchestrator | 2026-03-09 00:47:37 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:37.495423 | orchestrator | 2026-03-09 00:47:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:40.822270 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:40.822388 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:40.822401 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:40.822410 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:40.822416 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:40.822423 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:40.822530 | orchestrator | 2026-03-09 00:47:40 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:40.822539 | orchestrator | 2026-03-09 00:47:40 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:44.002529 | orchestrator | 2026-03-09 00:47:43 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:44.002598 | orchestrator | 2026-03-09 00:47:43 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:44.023279 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:44.058779 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:44.058869 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:44.058884 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:44.058911 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:44.058921 | orchestrator | 2026-03-09 00:47:44 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:47.254508 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:47.259288 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:47.264793 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:47.269178 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:47.276559 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:47.280895 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:47.283084 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:47.283131 | orchestrator | 2026-03-09 00:47:47 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:50.554851 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:50.555112 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:50.557009 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:50.558226 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:50.559742 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:50.562585 | orchestrator | 2026-03-09
00:47:50 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state STARTED
2026-03-09 00:47:50.562624 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:50.562630 | orchestrator | 2026-03-09 00:47:50 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:53.654535 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:53.654624 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:53.654634 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:53.654931 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:47:53.661762 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:53.661824 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:47:53.661832 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 6e447993-960f-4006-ac76-298c42f6f912 is in state SUCCESS
2026-03-09 00:47:53.662523 | orchestrator |
2026-03-09 00:47:53.662551 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-09 00:47:53.662556 | orchestrator |
2026-03-09 00:47:53.662561 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-09 00:47:53.662566 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:00.764) 0:00:00.764 **********
2026-03-09 00:47:53.662570 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:53.662575 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:47:53.662579 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:47:53.662583 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:47:53.662602 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:47:53.662606 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:47:53.662610 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:47:53.662613 | orchestrator |
2026-03-09 00:47:53.662617 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-09 00:47:53.662621 | orchestrator | Monday 09 March 2026 00:47:35 +0000 (0:00:05.217) 0:00:05.981 **********
2026-03-09 00:47:53.662626 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:47:53.662630 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:47:53.662634 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:47:53.662637 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:47:53.662641 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:47:53.662645 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:47:53.662649 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-09 00:47:53.662653 | orchestrator |
2026-03-09 00:47:53.662657 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-09 00:47:53.662661 | orchestrator | Monday 09 March 2026 00:47:38 +0000 (0:00:02.506) 0:00:08.487 **********
2026-03-09 00:47:53.662671 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:36.774143', 'end': '2026-03-09 00:47:36.782378', 'delta': '0:00:00.008235', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662678 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:36.739221', 'end': '2026-03-09 00:47:36.749800', 'delta': '0:00:00.010579', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662683 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:36.901678', 'end': '2026-03-09 00:47:36.905895', 'delta': '0:00:00.004217', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662701 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:36.735482', 'end': '2026-03-09 00:47:36.745229', 'delta': '0:00:00.009747', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662714 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:36.852924', 'end': '2026-03-09 00:47:36.862282', 'delta': '0:00:00.009358', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662721 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:37.570019', 'end': '2026-03-09 00:47:37.580037', 'delta': '0:00:00.010018', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662727 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:47:37.905997', 'end': '2026-03-09 00:47:37.913375', 'delta': '0:00:00.007378', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-09 00:47:53.662732 | orchestrator |
2026-03-09 00:47:53.662738 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-09 00:47:53.662744 | orchestrator | Monday 09 March 2026 00:47:41 +0000 (0:00:02.901) 0:00:11.388 **********
2026-03-09 00:47:53.662750 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:47:53.662756 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:47:53.662762 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:47:53.662767 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:47:53.662774 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:47:53.662780 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:47:53.662786 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-09 00:47:53.662792 | orchestrator |
2026-03-09 00:47:53.662799 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-09 00:47:53.662807 | orchestrator | Monday 09 March 2026 00:47:46 +0000 (0:00:05.042) 0:00:16.431 **********
2026-03-09 00:47:53.662811 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:47:53.662815 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:47:53.662819 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:47:53.662822 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:47:53.662826 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:47:53.662830 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:47:53.662833 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-09 00:47:53.662837 | orchestrator |
2026-03-09 00:47:53.662841 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:47:53.662849 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662855 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662858 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662862 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662866 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662935 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662939 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:53.662943 | orchestrator |
2026-03-09 00:47:53.662947 | orchestrator |
2026-03-09 00:47:53.662952 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:47:53.662958 | orchestrator | Monday 09 March 2026 00:47:50 +0000 (0:00:04.269) 0:00:20.701 **********
2026-03-09 00:47:53.662969 | orchestrator | ===============================================================================
2026-03-09 00:47:53.662978 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.22s
2026-03-09 00:47:53.662984 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 5.04s
2026-03-09 00:47:53.662990 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.27s
2026-03-09 00:47:53.662996 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.90s
2026-03-09 00:47:53.663002 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.51s
2026-03-09 00:47:53.663011 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:47:53.663188 | orchestrator | 2026-03-09 00:47:53 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:56.789296 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:47:56.789353 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:47:56.789361 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:47:56.789367 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:47:56.789372 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:47:56.789394 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is
in state STARTED 2026-03-09 00:47:56.789400 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:47:56.789406 | orchestrator | 2026-03-09 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:59.888149 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED 2026-03-09 00:47:59.888996 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED 2026-03-09 00:47:59.890906 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED 2026-03-09 00:47:59.892000 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED 2026-03-09 00:47:59.899865 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED 2026-03-09 00:47:59.899940 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:47:59.899949 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:47:59.899957 | orchestrator | 2026-03-09 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:03.081946 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED 2026-03-09 00:48:03.082063 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED 2026-03-09 00:48:03.082074 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED 2026-03-09 00:48:03.082082 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED 2026-03-09 00:48:03.082090 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in 
state STARTED 2026-03-09 00:48:03.082097 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:48:03.082104 | orchestrator | 2026-03-09 00:48:03 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:48:03.082112 | orchestrator | 2026-03-09 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:06.153216 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED 2026-03-09 00:48:06.153291 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED 2026-03-09 00:48:06.154161 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED 2026-03-09 00:48:06.154212 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED 2026-03-09 00:48:06.157043 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED 2026-03-09 00:48:06.159893 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:48:06.159932 | orchestrator | 2026-03-09 00:48:06 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:48:06.159937 | orchestrator | 2026-03-09 00:48:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:09.286257 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED 2026-03-09 00:48:09.286373 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED 2026-03-09 00:48:09.286384 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED 2026-03-09 00:48:09.286390 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state 
STARTED
2026-03-09 00:48:09.286396 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:09.286402 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:09.286408 | orchestrator | 2026-03-09 00:48:09 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:09.286414 | orchestrator | 2026-03-09 00:48:09 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:12.389563 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:12.389700 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:48:12.389726 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state STARTED
2026-03-09 00:48:12.391134 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:12.394210 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:12.408163 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:12.408276 | orchestrator | 2026-03-09 00:48:12 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:12.408301 | orchestrator | 2026-03-09 00:48:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:15.710268 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:15.710382 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:48:15.755964 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task b17f1479-b83f-4a91-a270-198d81a2c237 is in state SUCCESS
2026-03-09 00:48:15.756056 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:15.756068 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:15.756370 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:15.756383 | orchestrator | 2026-03-09 00:48:15 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:15.756391 | orchestrator | 2026-03-09 00:48:15 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:18.789094 | orchestrator | 2026-03-09 00:48:18 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:18.797006 | orchestrator | 2026-03-09 00:48:18 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:48:18.798066 | orchestrator | 2026-03-09 00:48:18 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:18.800207 | orchestrator | 2026-03-09 00:48:18 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:18.804507 | orchestrator | 2026-03-09 00:48:18 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:18.809071 | orchestrator | 2026-03-09 00:48:18 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:18.809171 | orchestrator | 2026-03-09 00:48:18 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:22.027124 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:22.027224 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:48:22.027242 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:22.027257 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:22.027270 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:22.027284 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:22.027298 | orchestrator | 2026-03-09 00:48:21 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:25.387615 | orchestrator | 2026-03-09 00:48:25 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:25.387712 | orchestrator | 2026-03-09 00:48:25 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state STARTED
2026-03-09 00:48:25.387727 | orchestrator | 2026-03-09 00:48:25 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:25.387742 | orchestrator | 2026-03-09 00:48:25 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:25.387758 | orchestrator | 2026-03-09 00:48:25 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:25.387780 | orchestrator | 2026-03-09 00:48:25 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:25.387814 | orchestrator | 2026-03-09 00:48:25 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:28.125159 | orchestrator | 2026-03-09 00:48:28 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:28.125269 | orchestrator | 2026-03-09 00:48:28 | INFO  | Task ede5f393-d486-4556-9bd1-b5cd81caf3d1 is in state SUCCESS
2026-03-09 00:48:28.126719 | orchestrator | 2026-03-09 00:48:28 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:28.129082 | orchestrator | 2026-03-09 00:48:28 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
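The wait loop above repeatedly asks the manager for each task's state and sleeps between rounds until every task reports SUCCESS. A minimal sketch of that poll-and-wait pattern, assuming a hypothetical `get_task_state` callable rather than OSISM's actual client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, delay=1.0):
    """Poll task states, logging each check, until every task reports SUCCESS."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. STARTED, SUCCESS
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(delay)} second(s) until the next check")
            time.sleep(delay)
```

Tasks leave the polling set one by one as they complete, which is why the number of "is in state STARTED" lines per round shrinks over the course of the log.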
2026-03-09 00:48:28.131237 | orchestrator | 2026-03-09 00:48:28 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:28.132817 | orchestrator | 2026-03-09 00:48:28 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:28.132849 | orchestrator | 2026-03-09 00:48:28 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:31.190247 | orchestrator | 2026-03-09 00:48:31 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:31.190354 | orchestrator | 2026-03-09 00:48:31 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:31.190369 | orchestrator | 2026-03-09 00:48:31 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:31.191025 | orchestrator | 2026-03-09 00:48:31 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:31.192138 | orchestrator | 2026-03-09 00:48:31 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:31.192222 | orchestrator | 2026-03-09 00:48:31 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:34.307993 | orchestrator | 2026-03-09 00:48:34 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:34.308086 | orchestrator | 2026-03-09 00:48:34 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:34.308097 | orchestrator | 2026-03-09 00:48:34 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:34.309520 | orchestrator | 2026-03-09 00:48:34 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:34.311742 | orchestrator | 2026-03-09 00:48:34 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:34.311794 | orchestrator | 2026-03-09 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:37.358774 | orchestrator | 2026-03-09 00:48:37 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:37.359959 | orchestrator | 2026-03-09 00:48:37 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:37.363190 | orchestrator | 2026-03-09 00:48:37 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:37.363242 | orchestrator | 2026-03-09 00:48:37 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:37.363600 | orchestrator | 2026-03-09 00:48:37 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:37.363627 | orchestrator | 2026-03-09 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:40.418012 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:40.418138 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:40.420634 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:40.423068 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:40.423561 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:40.423584 | orchestrator | 2026-03-09 00:48:40 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:43.475280 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:43.478753 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:43.479596 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:43.481675 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:43.482498 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:43.483374 | orchestrator | 2026-03-09 00:48:43 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:46.556085 | orchestrator | 2026-03-09 00:48:46 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:46.556187 | orchestrator | 2026-03-09 00:48:46 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:46.556614 | orchestrator | 2026-03-09 00:48:46 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:46.557822 | orchestrator | 2026-03-09 00:48:46 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:46.559691 | orchestrator | 2026-03-09 00:48:46 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:46.561318 | orchestrator | 2026-03-09 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:49.601817 | orchestrator | 2026-03-09 00:48:49 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:49.603643 | orchestrator | 2026-03-09 00:48:49 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:49.605585 | orchestrator | 2026-03-09 00:48:49 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:49.607524 | orchestrator | 2026-03-09 00:48:49 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:49.608979 | orchestrator | 2026-03-09 00:48:49 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:49.609052 | orchestrator | 2026-03-09 00:48:49 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:52.660858 | orchestrator | 2026-03-09 00:48:52 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:52.661382 | orchestrator | 2026-03-09 00:48:52 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:52.662985 | orchestrator | 2026-03-09 00:48:52 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:52.663665 | orchestrator | 2026-03-09 00:48:52 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:52.664505 | orchestrator | 2026-03-09 00:48:52 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:52.664546 | orchestrator | 2026-03-09 00:48:52 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:55.699023 | orchestrator | 2026-03-09 00:48:55 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:55.699750 | orchestrator | 2026-03-09 00:48:55 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:55.700186 | orchestrator | 2026-03-09 00:48:55 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:55.701032 | orchestrator | 2026-03-09 00:48:55 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:55.701807 | orchestrator | 2026-03-09 00:48:55 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:55.702624 | orchestrator | 2026-03-09 00:48:55 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:48:58.785088 | orchestrator | 2026-03-09 00:48:58 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:48:58.793376 | orchestrator | 2026-03-09 00:48:58 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:48:58.804207 | orchestrator | 2026-03-09 00:48:58 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:48:58.809885 | orchestrator | 2026-03-09 00:48:58 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:48:58.811769 | orchestrator | 2026-03-09 00:48:58 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:48:58.812858 | orchestrator | 2026-03-09 00:48:58 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:01.879611 | orchestrator | 2026-03-09 00:49:01 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:01.891274 | orchestrator | 2026-03-09 00:49:01 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:49:01.896920 | orchestrator | 2026-03-09 00:49:01 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:49:01.899987 | orchestrator | 2026-03-09 00:49:01 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:01.901982 | orchestrator | 2026-03-09 00:49:01 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:01.902910 | orchestrator | 2026-03-09 00:49:01 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:05.072009 | orchestrator | 2026-03-09 00:49:05 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:05.072510 | orchestrator | 2026-03-09 00:49:05 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:49:05.073063 | orchestrator | 2026-03-09 00:49:05 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:49:05.075222 | orchestrator | 2026-03-09 00:49:05 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:05.077206 | orchestrator | 2026-03-09 00:49:05 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:05.077332 | orchestrator | 2026-03-09 00:49:05 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:08.140712 | orchestrator | 2026-03-09 00:49:08 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:08.150879 | orchestrator | 2026-03-09 00:49:08 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:49:08.150976 | orchestrator | 2026-03-09 00:49:08 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:49:08.150992 | orchestrator | 2026-03-09 00:49:08 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:08.151609 | orchestrator | 2026-03-09 00:49:08 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:08.151645 | orchestrator | 2026-03-09 00:49:08 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:11.198697 | orchestrator | 2026-03-09 00:49:11 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:11.198846 | orchestrator | 2026-03-09 00:49:11 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state STARTED
2026-03-09 00:49:11.200421 | orchestrator | 2026-03-09 00:49:11 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:49:11.201963 | orchestrator | 2026-03-09 00:49:11 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:11.204733 | orchestrator | 2026-03-09 00:49:11 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:11.204777 | orchestrator | 2026-03-09 00:49:11 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:14.247279 | orchestrator | 2026-03-09 00:49:14 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:14.248795 | orchestrator | 2026-03-09 00:49:14 | INFO  | Task afe1cec7-d6bc-4042-9b25-649a6497321c is in state SUCCESS
2026-03-09 00:49:14.250268 | orchestrator |
2026-03-09 00:49:14.250324 | orchestrator |
2026-03-09 00:49:14.250337 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-09 00:49:14.250349 | orchestrator |
2026-03-09 00:49:14.250361 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-09 00:49:14.250404 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:00.326) 0:00:00.326 **********
2026-03-09 00:49:14.250423 | orchestrator | ok: [testbed-manager] => {
2026-03-09 00:49:14.250460 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-09 00:49:14.250473 | orchestrator | }
2026-03-09 00:49:14.250484 | orchestrator |
2026-03-09 00:49:14.250495 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-09 00:49:14.250505 | orchestrator | Monday 09 March 2026 00:47:31 +0000 (0:00:00.683) 0:00:01.010 **********
2026-03-09 00:49:14.250516 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.250527 | orchestrator |
2026-03-09 00:49:14.250538 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-09 00:49:14.250549 | orchestrator | Monday 09 March 2026 00:47:33 +0000 (0:00:02.466) 0:00:03.476 **********
2026-03-09 00:49:14.250559 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-09 00:49:14.250570 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-09 00:49:14.250581 | orchestrator |
2026-03-09 00:49:14.250622 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-09 00:49:14.250633 | orchestrator | Monday 09 March 2026 00:47:34 +0000 (0:00:01.219) 0:00:04.696 **********
2026-03-09 00:49:14.250643 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.250654 | orchestrator |
2026-03-09 00:49:14.250665 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-09 00:49:14.250676 | orchestrator | Monday 09 March 2026 00:47:38 +0000 (0:00:03.290) 0:00:07.986 **********
2026-03-09 00:49:14.250689 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.250707 | orchestrator |
2026-03-09 00:49:14.250723 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-09 00:49:14.250748 | orchestrator | Monday 09 March 2026 00:47:40 +0000 (0:00:02.627) 0:00:10.613 **********
2026-03-09 00:49:14.250766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-09 00:49:14.250784 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.250802 | orchestrator |
2026-03-09 00:49:14.250822 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-09 00:49:14.250842 | orchestrator | Monday 09 March 2026 00:48:09 +0000 (0:00:29.141) 0:00:39.755 **********
2026-03-09 00:49:14.250861 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.250881 | orchestrator |
2026-03-09 00:49:14.250900 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:49:14.250921 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:14.250943 | orchestrator |
2026-03-09 00:49:14.250962 | orchestrator |
2026-03-09 00:49:14.250982 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:49:14.251002 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:04.300) 0:00:44.055 **********
2026-03-09 00:49:14.251021 | orchestrator | ===============================================================================
2026-03-09 00:49:14.251041 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.14s
2026-03-09 00:49:14.251061 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.30s
2026-03-09 00:49:14.251081 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.29s
2026-03-09 00:49:14.251100 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.63s
2026-03-09 00:49:14.251120 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.47s
2026-03-09 00:49:14.251140 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.22s
2026-03-09 00:49:14.251160 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.68s
2026-03-09 00:49:14.251179 | orchestrator |
2026-03-09 00:49:14.251199 | orchestrator |
2026-03-09 00:49:14.251220 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-09 00:49:14.251240 | orchestrator |
2026-03-09 00:49:14.251260 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-09 00:49:14.251292 | orchestrator | Monday 09 March 2026 00:47:32 +0000 (0:00:00.940) 0:00:00.940 **********
2026-03-09 00:49:14.251313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-09 00:49:14.251334 | orchestrator |
2026-03-09 00:49:14.251353 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-09 00:49:14.251454 | orchestrator | Monday 09 March 2026 00:47:33 +0000 (0:00:01.028) 0:00:01.968 **********
2026-03-09 00:49:14.251477 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-09 00:49:14.251496 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-09 00:49:14.251516 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-09 00:49:14.251537 | orchestrator |
2026-03-09 00:49:14.251557 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-09 00:49:14.251576 | orchestrator | Monday 09 March 2026 00:47:35 +0000 (0:00:01.772) 0:00:03.741 **********
2026-03-09 00:49:14.251596 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.251616 | orchestrator |
2026-03-09 00:49:14.251635 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-09 00:49:14.251655 | orchestrator | Monday 09 March 2026 00:47:38 +0000 (0:00:03.278) 0:00:07.020 **********
2026-03-09 00:49:14.251694 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-09 00:49:14.251715 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.251735 | orchestrator |
2026-03-09 00:49:14.251755 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-09 00:49:14.251774 | orchestrator | Monday 09 March 2026 00:48:13 +0000 (0:00:35.138) 0:00:42.158 **********
2026-03-09 00:49:14.251794 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.251814 | orchestrator |
2026-03-09 00:49:14.251834 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-09 00:49:14.251854 | orchestrator | Monday 09 March 2026 00:48:15 +0000 (0:00:01.621) 0:00:43.779 **********
2026-03-09 00:49:14.251873 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.251892 | orchestrator |
2026-03-09 00:49:14.251911 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-09 00:49:14.251928 | orchestrator | Monday 09 March 2026 00:48:17 +0000 (0:00:01.692) 0:00:45.472 **********
2026-03-09 00:49:14.251945 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.251963 | orchestrator |
2026-03-09 00:49:14.251982 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-09 00:49:14.252002 | orchestrator | Monday 09 March 2026 00:48:20 +0000 (0:00:02.875) 0:00:48.347 **********
2026-03-09 00:49:14.252014 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.252025 | orchestrator |
2026-03-09 00:49:14.252035 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-09 00:49:14.252045 | orchestrator | Monday 09 March 2026 00:48:21 +0000 (0:00:01.594) 0:00:49.942 **********
2026-03-09 00:49:14.252054 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.252066 | orchestrator |
2026-03-09 00:49:14.252081 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-09 00:49:14.252097 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:02.000) 0:00:51.942 **********
2026-03-09 00:49:14.252112 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.252127 | orchestrator |
2026-03-09 00:49:14.252141 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:49:14.252164 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:14.252186 | orchestrator |
2026-03-09 00:49:14.252204 | orchestrator |
2026-03-09 00:49:14.252219 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:49:14.252235 | orchestrator | Monday 09 March 2026 00:48:25 +0000 (0:00:01.950) 0:00:53.893 **********
2026-03-09 00:49:14.252265 | orchestrator | ===============================================================================
2026-03-09 00:49:14.252282 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.14s
2026-03-09 00:49:14.252314 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.28s
2026-03-09 00:49:14.252331 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.88s
2026-03-09 00:49:14.252342 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 2.00s
2026-03-09 00:49:14.252409 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.95s
2026-03-09 00:49:14.252421 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.77s
2026-03-09 00:49:14.252431 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.69s
2026-03-09 00:49:14.252440 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.62s
2026-03-09 00:49:14.252450 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.60s
2026-03-09 00:49:14.252460 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.03s
2026-03-09 00:49:14.252469 | orchestrator |
2026-03-09 00:49:14.252479 | orchestrator |
2026-03-09 00:49:14.252488 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-09 00:49:14.252498 | orchestrator |
2026-03-09 00:49:14.252507 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-09 00:49:14.252517 | orchestrator | Monday 09 March 2026 00:47:58 +0000 (0:00:00.339) 0:00:00.339 **********
2026-03-09 00:49:14.252526 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.252536 | orchestrator |
2026-03-09 00:49:14.252545 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-09 00:49:14.252555 | orchestrator | Monday 09 March 2026 00:48:00 +0000 (0:00:01.427) 0:00:01.766 **********
2026-03-09 00:49:14.252564 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-09 00:49:14.252574 | orchestrator |
2026-03-09 00:49:14.252584 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-09 00:49:14.252593 | orchestrator | Monday 09 March 2026 00:48:01 +0000 (0:00:01.059) 0:00:02.826 **********
2026-03-09 00:49:14.252602 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.252612 | orchestrator |
2026-03-09 00:49:14.252621 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-09 00:49:14.252631 | orchestrator | Monday 09 March 2026 00:48:03 +0000 (0:00:02.339) 0:00:05.166 **********
2026-03-09 00:49:14.252640 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-09 00:49:14.252650 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:14.252659 | orchestrator |
2026-03-09 00:49:14.252669 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-09 00:49:14.252678 | orchestrator | Monday 09 March 2026 00:49:03 +0000 (0:01:00.133) 0:01:05.299 **********
2026-03-09 00:49:14.252687 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:14.252697 | orchestrator |
2026-03-09 00:49:14.252706 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:49:14.252716 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:14.252726 | orchestrator |
2026-03-09 00:49:14.252735 | orchestrator |
2026-03-09 00:49:14.252744 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:49:14.252765 | orchestrator | Monday 09 March 2026 00:49:11 +0000 (0:00:07.883) 0:01:13.182 **********
2026-03-09 00:49:14.252775 | orchestrator | ===============================================================================
2026-03-09 00:49:14.252785 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.13s
2026-03-09 00:49:14.252794 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.88s
2026-03-09 00:49:14.252812 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.34s
2026-03-09 00:49:14.252823 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.43s
2026-03-09 00:49:14.252840 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.06s
2026-03-09 00:49:14.253023 | orchestrator | 2026-03-09 00:49:14 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:49:14.253049 | orchestrator | 2026-03-09 00:49:14 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:14.254466 | orchestrator | 2026-03-09 00:49:14 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:14.254555 | orchestrator | 2026-03-09 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:17.296001 | orchestrator | 2026-03-09 00:49:17 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:17.301303 | orchestrator | 2026-03-09 00:49:17 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state STARTED
2026-03-09 00:49:17.303234 | orchestrator | 2026-03-09 00:49:17 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:17.304154 | orchestrator | 2026-03-09 00:49:17 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:17.304199 | orchestrator | 2026-03-09 00:49:17 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:20.360896 | orchestrator |
2026-03-09 00:49:20.361002 | orchestrator |
2026-03-09 00:49:20.361018 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:49:20.361030 | orchestrator |
2026-03-09 00:49:20.361042 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:49:20.361054 | orchestrator | Monday 09 March 2026 00:47:31 +0000 (0:00:01.311) 0:00:01.311 **********
2026-03-09 00:49:20.361066 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-09 00:49:20.361077 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-09 00:49:20.361088 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-09 00:49:20.361099 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-09 00:49:20.361110 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-09 00:49:20.361121 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-09 00:49:20.361132 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-09 00:49:20.361143 | orchestrator |
2026-03-09 00:49:20.361154 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-09 00:49:20.361164 | orchestrator |
2026-03-09 00:49:20.361175 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-09 00:49:20.361186 | orchestrator | Monday 09 March 2026 00:47:34 +0000 (0:00:02.683) 0:00:03.994 **********
2026-03-09 00:49:20.361219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:49:20.361233 | orchestrator |
2026-03-09 00:49:20.361245 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-09 00:49:20.361298 | orchestrator | Monday 09 March 2026 00:47:36 +0000 (0:00:02.097) 0:00:06.092 **********
2026-03-09 00:49:20.361310 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:49:20.361323 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:49:20.361334 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:20.361345 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:49:20.361356 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:49:20.361551 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:49:20.361566 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:49:20.361579 | orchestrator |
2026-03-09 00:49:20.361592 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-09 00:49:20.361626 | orchestrator | Monday 09 March 2026 00:47:41 +0000 (0:00:05.731) 0:00:10.867 **********
2026-03-09 00:49:20.361639 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:49:20.361651 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:49:20.361664 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:49:20.361676 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:20.361689 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:49:20.361701 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:49:20.361714 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:49:20.361726 | orchestrator |
2026-03-09 00:49:20.361739 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-09 00:49:20.361752 | orchestrator | Monday 09 March 2026 00:47:47 +0000 (0:00:05.731) 0:00:16.598 **********
2026-03-09 00:49:20.361765 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:20.361778 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:49:20.361791 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:49:20.361803 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:49:20.361816 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:49:20.361826 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:49:20.361837 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:49:20.361848 | orchestrator |
2026-03-09 00:49:20.361859 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-09 00:49:20.361870 | orchestrator | Monday 09 March 2026 00:47:49 +0000 (0:00:02.864) 0:00:19.463 **********
2026-03-09 00:49:20.361880 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:49:20.361891 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:49:20.361902 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:49:20.361913 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:49:20.361924 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:49:20.361934 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:49:20.361945 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:20.361956 | orchestrator |
2026-03-09 00:49:20.361967 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-09 00:49:20.361978 | orchestrator | Monday 09 March 2026 00:48:01 +0000 (0:00:11.641) 0:00:31.105 **********
2026-03-09 00:49:20.361989 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:49:20.361999 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:49:20.362010 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:49:20.362095 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:49:20.362114 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:49:20.362132 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:49:20.362150 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:20.362166 | orchestrator |
2026-03-09 00:49:20.362185 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-09 00:49:20.362204 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:46.763) 0:01:17.869 **********
2026-03-09 00:49:20.362224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:49:20.362238
| orchestrator | 2026-03-09 00:49:20.362249 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-09 00:49:20.362261 | orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:01.350) 0:01:19.220 ********** 2026-03-09 00:49:20.362271 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-09 00:49:20.362283 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-09 00:49:20.362302 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-09 00:49:20.362313 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-09 00:49:20.362345 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-09 00:49:20.362356 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-09 00:49:20.362391 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-09 00:49:20.362413 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-09 00:49:20.362424 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-09 00:49:20.362435 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-09 00:49:20.362446 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-09 00:49:20.362457 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-09 00:49:20.362468 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-09 00:49:20.362479 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-09 00:49:20.362490 | orchestrator | 2026-03-09 00:49:20.362501 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-09 00:49:20.362513 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:04.595) 0:01:23.815 ********** 2026-03-09 00:49:20.362524 | orchestrator | ok: [testbed-manager] 2026-03-09 00:49:20.362536 | orchestrator | ok: [testbed-node-0] 
2026-03-09 00:49:20.362546 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:49:20.362557 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:49:20.362568 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:49:20.362579 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:49:20.362590 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:49:20.362600 | orchestrator |
2026-03-09 00:49:20.362611 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-09 00:49:20.362622 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:00.981) 0:01:24.797 **********
2026-03-09 00:49:20.362633 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:20.362644 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:49:20.362655 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:49:20.362666 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:49:20.362677 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:49:20.362688 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:49:20.362699 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:49:20.362709 | orchestrator |
2026-03-09 00:49:20.362720 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-09 00:49:20.362731 | orchestrator | Monday 09 March 2026 00:48:56 +0000 (0:00:01.794) 0:01:26.591 **********
2026-03-09 00:49:20.362742 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:49:20.362753 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:49:20.362764 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:49:20.362775 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:49:20.362786 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:20.362797 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:49:20.362807 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:49:20.362818 | orchestrator |
2026-03-09 00:49:20.362829 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-09 00:49:20.362840 | orchestrator | Monday 09 March 2026 00:48:59 +0000 (0:00:02.492) 0:01:29.084 **********
2026-03-09 00:49:20.362851 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:49:20.362862 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:49:20.362872 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:49:20.362883 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:49:20.362894 | orchestrator | ok: [testbed-manager]
2026-03-09 00:49:20.362905 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:49:20.362915 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:49:20.362926 | orchestrator |
2026-03-09 00:49:20.362937 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-09 00:49:20.362948 | orchestrator | Monday 09 March 2026 00:49:02 +0000 (0:00:02.913) 0:01:31.998 **********
2026-03-09 00:49:20.362959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-09 00:49:20.362973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:49:20.362991 | orchestrator |
2026-03-09 00:49:20.363002 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-09 00:49:20.363013 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:02.320) 0:01:34.318 **********
2026-03-09 00:49:20.363024 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:20.363035 | orchestrator |
2026-03-09 00:49:20.363046 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-09 00:49:20.363057 | orchestrator | Monday 09 March 2026 00:49:07 +0000 (0:00:02.657) 0:01:36.976 **********
2026-03-09 00:49:20.363067 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:49:20.363078 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:49:20.363089 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:49:20.363100 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:49:20.363111 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:49:20.363122 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:49:20.363133 | orchestrator | changed: [testbed-manager]
2026-03-09 00:49:20.363143 | orchestrator |
2026-03-09 00:49:20.363154 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:49:20.363165 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363178 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363189 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363205 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363224 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363236 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363247 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:49:20.363258 | orchestrator |
2026-03-09 00:49:20.363269 | orchestrator |
2026-03-09 00:49:20.363280 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:49:20.363291 | orchestrator | Monday 09 March 2026 00:49:19 +0000 (0:00:11.887) 0:01:48.864 **********
2026-03-09 00:49:20.363302 | orchestrator | ===============================================================================
2026-03-09 00:49:20.363313 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 46.76s
2026-03-09 00:49:20.363324 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.89s
2026-03-09 00:49:20.363335 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.64s
2026-03-09 00:49:20.363345 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.73s
2026-03-09 00:49:20.363356 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.78s
2026-03-09 00:49:20.363387 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.60s
2026-03-09 00:49:20.363399 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.91s
2026-03-09 00:49:20.363410 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.86s
2026-03-09 00:49:20.363421 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.68s
2026-03-09 00:49:20.363432 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.66s
2026-03-09 00:49:20.363442 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.49s
2026-03-09 00:49:20.363453 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.32s
2026-03-09 00:49:20.363471 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.10s
2026-03-09 00:49:20.363482 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.80s
2026-03-09 00:49:20.363493 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.35s
2026-03-09 00:49:20.363504 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.98s
2026-03-09 00:49:20.363515 | orchestrator | 2026-03-09 00:49:20 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:20.363527 | orchestrator | 2026-03-09 00:49:20 | INFO  | Task 8fa95c07-e0a0-4ffc-8349-3619badbd659 is in state SUCCESS
2026-03-09 00:49:20.363538 | orchestrator | 2026-03-09 00:49:20 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:20.363549 | orchestrator | 2026-03-09 00:49:20 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:20.363560 | orchestrator | 2026-03-09 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:23.407455 | orchestrator | 2026-03-09 00:49:23 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:23.407546 | orchestrator | 2026-03-09 00:49:23 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:23.409058 | orchestrator | 2026-03-09 00:49:23 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:23.409106 | orchestrator | 2026-03-09 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:26.472417 | orchestrator | 2026-03-09 00:49:26 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:26.475346 | orchestrator | 2026-03-09 00:49:26 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:26.480171 | orchestrator | 2026-03-09 00:49:26 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:26.480235 | orchestrator | 2026-03-09 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:29.518619 | orchestrator | 2026-03-09 00:49:29 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:29.519549 | orchestrator | 2026-03-09 00:49:29 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:29.520812 | orchestrator | 2026-03-09 00:49:29 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:29.520836 | orchestrator | 2026-03-09 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:32.561547 | orchestrator | 2026-03-09 00:49:32 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:32.561646 | orchestrator | 2026-03-09 00:49:32 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:32.562292 | orchestrator | 2026-03-09 00:49:32 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:32.563337 | orchestrator | 2026-03-09 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:35.608111 | orchestrator | 2026-03-09 00:49:35 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:35.609764 | orchestrator | 2026-03-09 00:49:35 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:35.612954 | orchestrator | 2026-03-09 00:49:35 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:35.612987 | orchestrator | 2026-03-09 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:38.668517 | orchestrator | 2026-03-09 00:49:38 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:38.673013 | orchestrator | 2026-03-09 00:49:38 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:38.676492 | orchestrator | 2026-03-09 00:49:38 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:38.676577 | orchestrator | 2026-03-09 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:41.718862 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:41.719529 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:41.720527 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:41.720571 | orchestrator | 2026-03-09 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:44.762871 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:44.763465 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:44.767410 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:44.767478 | orchestrator | 2026-03-09 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:47.815210 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:47.815770 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:47.817142 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:47.817191 | orchestrator | 2026-03-09 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:50.873002 | orchestrator | 2026-03-09 00:49:50 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:50.875912 | orchestrator | 2026-03-09 00:49:50 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:50.879063 | orchestrator | 2026-03-09 00:49:50 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:50.879118 | orchestrator | 2026-03-09 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:53.923745 | orchestrator | 2026-03-09 00:49:53 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:53.924752 | orchestrator | 2026-03-09 00:49:53 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:53.927300 | orchestrator | 2026-03-09 00:49:53 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:53.927884 | orchestrator | 2026-03-09 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:56.971780 | orchestrator | 2026-03-09 00:49:56 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:49:56.973542 | orchestrator | 2026-03-09 00:49:56 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:49:56.974793 | orchestrator | 2026-03-09 00:49:56 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:49:56.974841 | orchestrator | 2026-03-09 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:00.025510 | orchestrator | 2026-03-09 00:50:00 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:50:00.025912 | orchestrator | 2026-03-09 00:50:00 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:50:00.027109 | orchestrator | 2026-03-09 00:50:00 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:50:00.027139 | orchestrator | 2026-03-09 00:50:00 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:03.064905 | orchestrator | 2026-03-09 00:50:03 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:50:03.066831 | orchestrator | 2026-03-09 00:50:03 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:50:03.066954 | orchestrator | 2026-03-09 00:50:03 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:50:03.066969 | orchestrator | 2026-03-09 00:50:03 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:06.121808 | orchestrator | 2026-03-09 00:50:06 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:50:06.121878 | orchestrator | 2026-03-09 00:50:06 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:50:06.122072 | orchestrator | 2026-03-09 00:50:06 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:50:06.122082 | orchestrator | 2026-03-09 00:50:06 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:09.161691 | orchestrator | 2026-03-09 00:50:09 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:50:09.162249 | orchestrator | 2026-03-09 00:50:09 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:50:09.163178 | orchestrator | 2026-03-09 00:50:09 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:50:09.163237 | orchestrator | 2026-03-09 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:12.216279 | orchestrator | 2026-03-09 00:50:12 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:50:12.216767 | orchestrator | 2026-03-09 00:50:12 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:50:12.218003 | orchestrator | 2026-03-09 00:50:12 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:50:12.218110 | orchestrator | 2026-03-09 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:15.274682 | orchestrator | 2026-03-09 00:50:15 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state STARTED
2026-03-09 00:50:15.275427 | orchestrator | 2026-03-09 00:50:15 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:50:15.278378 | orchestrator | 2026-03-09 00:50:15 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:50:15.278435 | orchestrator | 2026-03-09 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:18.354145 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task ef64f055-bd16-48cb-b2ad-955342269b2e is in state SUCCESS
2026-03-09 00:50:18.356545 | orchestrator |
2026-03-09 00:50:18.356621 | orchestrator |
2026-03-09 00:50:18.356631 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-09 00:50:18.356640 | orchestrator |
2026-03-09 00:50:18.356646 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-09 00:50:18.356652 | orchestrator | Monday 09 March 2026 00:47:22 +0000 (0:00:00.308) 0:00:00.308 **********
2026-03-09 00:50:18.356659 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:50:18.356684 | orchestrator |
2026-03-09 00:50:18.356690 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-09 00:50:18.356695 | orchestrator | Monday 09 March 2026 00:47:23 +0000 (0:00:01.394) 0:00:01.702 **********
2026-03-09 00:50:18.356701 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356707 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356714 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356721 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356727 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356733 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356739 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356745 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356752 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356762 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.356770 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.356777 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356783 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356788 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356794 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:50:18.356800 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.356805 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.356812 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.356818 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.356824 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:50:18.356830 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:50:18.357714 | orchestrator |
2026-03-09 00:50:18.357737 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-09 00:50:18.357743 | orchestrator | Monday 09 March 2026 00:47:28 +0000 (0:00:04.931) 0:00:06.634 **********
2026-03-09 00:50:18.357750 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:50:18.357757 | orchestrator |
2026-03-09 00:50:18.357763 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-09 00:50:18.357768 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:01.497) 0:00:08.131 **********
2026-03-09 00:50:18.357777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357836 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357847 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357893 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:50:18.357920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:50:18.357988 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1',
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.357993 | orchestrator | 2026-03-09 00:50:18.357999 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-09 00:50:18.358006 | orchestrator | Monday 09 March 2026 00:47:37 +0000 (0:00:07.077) 0:00:15.209 ********** 2026-03-09 00:50:18.358062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358119 | orchestrator | 
skipping: [testbed-node-0] 2026-03-09 00:50:18.358129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358160 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:18.358166 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358190 | 
orchestrator | skipping: [testbed-manager] 2026-03-09 00:50:18.358196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358214 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:18.358221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358239 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:18.358245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358252 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:18.358259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358282 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:50:18.358288 | orchestrator | 2026-03-09 00:50:18.358294 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-09 00:50:18.358300 | orchestrator | Monday 09 March 2026 00:47:43 +0000 (0:00:06.434) 0:00:21.643 ********** 2026-03-09 00:50:18.358307 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358360 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358402 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 00:50:18.358413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358460 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:18.358466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358472 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 00:50:18.358478 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:18.358484 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:18.358493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.358511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-09 00:50:18.358518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358524 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:50:18.358531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.358536 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:18.358542 | orchestrator | 2026-03-09 00:50:18.358549 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-09 00:50:18.358555 | orchestrator | Monday 09 March 2026 00:47:52 +0000 (0:00:09.338) 0:00:30.982 ********** 2026-03-09 00:50:18.358561 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:50:18.358567 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:18.358573 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:18.358580 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:18.358586 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:18.358595 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:18.358601 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:50:18.358606 | orchestrator | 2026-03-09 00:50:18.358612 | orchestrator | TASK [common : Copying over /run subdirectories 
conf] **************************
2026-03-09 00:50:18.358618 | orchestrator | Monday 09 March 2026 00:47:55 +0000 (0:00:02.288) 0:00:33.271 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [common : Restart systemd-tmpfiles] ***************************************
Monday 09 March 2026 00:47:56 +0000 (0:00:01.465) 0:00:34.738 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [common : Copying over kolla.target] **************************************
Monday 09 March 2026 00:47:58 +0000 (0:00:01.414) 0:00:36.153 **********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [common : Copying over config.json files for services] ********************
Monday 09 March 2026 00:48:02 +0000 (0:00:04.227) 0:00:40.381 **********
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [common : Find custom fluentd input config files] *************************
Monday 09 March 2026 00:48:11 +0000 (0:00:09.596) 0:00:49.977 **********
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
ok: [testbed-manager -> localhost]

TASK [common : Find custom fluentd filter config files] ************************
Monday 09 March 2026 00:48:14 +0000 (0:00:02.715) 0:00:52.692 **********
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
ok: [testbed-manager -> localhost]

TASK [common : Find custom fluentd format config files] ************************
Monday 09 March 2026 00:48:16 +0000 (0:00:01.825) 0:00:54.517 **********
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
ok: [testbed-manager -> localhost]

TASK [common : Find custom fluentd output config files] ************************
Monday 09 March 2026 00:48:17 +0000 (0:00:01.501) 0:00:56.018 **********
[WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
ok: [testbed-manager -> localhost]

TASK [common : Copying over fluentd.conf] **************************************
Monday 09 March 2026 00:48:19 +0000 (0:00:01.462) 0:00:57.481 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]

TASK [common : Copying over cron logrotate config file] ************************
Monday 09 March 2026 00:48:27 +0000 (0:00:08.223) 0:01:05.705 **********
changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)

TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
Monday 09 March 2026 00:48:32 +0000 (0:00:04.434) 0:01:10.139 **********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [common : Ensuring config directories have correct owner and permission] ***
Monday 09 March 2026 00:48:35 +0000 (0:00:03.358) 0:01:13.497 **********
ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
Monday 09 March 2026 00:48:38 +0000 (0:00:02.643) 0:01:16.141 **********
changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)

TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
Monday 09 March 2026 00:48:41 +0000 (0:00:03.231) 0:01:19.372 **********
changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)

TASK [service-check-containers : common | Check containers] ********************
Monday 09 March 2026 00:48:44 +0000 (0:00:03.159) 0:01:22.531 **********
changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group':
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360058 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:50:18.360072 | orchestrator | 2026-03-09 00:50:18.360076 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-09 00:50:18.360081 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:04.148) 0:01:26.679 ********** 2026-03-09 00:50:18.360085 | orchestrator | changed: [testbed-manager] => { 2026-03-09 00:50:18.360089 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360094 | orchestrator | } 2026-03-09 00:50:18.360098 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:50:18.360103 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360108 | orchestrator | } 2026-03-09 00:50:18.360112 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:50:18.360116 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360120 | orchestrator | } 2026-03-09 00:50:18.360125 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:50:18.360132 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360137 | orchestrator | } 2026-03-09 00:50:18.360140 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 00:50:18.360144 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360148 | orchestrator | } 2026-03-09 00:50:18.360152 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 00:50:18.360156 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360159 | orchestrator | } 2026-03-09 00:50:18.360166 | orchestrator | changed: [testbed-node-5] => { 2026-03-09 00:50:18.360170 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:18.360174 | orchestrator | } 2026-03-09 00:50:18.360178 
| orchestrator | 2026-03-09 00:50:18.360182 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:50:18.360185 | orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:01.063) 0:01:27.743 ********** 2026-03-09 00:50:18.360193 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360198 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360202 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360206 | orchestrator | skipping: [testbed-manager] 2026-03-09 
00:50:18.360212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360227 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:18.360231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360252 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360260 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:18.360264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360280 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:18.360283 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:18.360289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360301 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:18.360309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:50:18.360313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:50:18.360368 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:50:18.360375 | orchestrator | 2026-03-09 00:50:18.360381 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-09 00:50:18.360387 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:01.984) 0:01:29.728 ********** 2026-03-09 00:50:18.360393 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:18.360399 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:18.360402 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:18.360406 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:18.360410 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:18.360414 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:18.360417 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:18.360421 | orchestrator | 2026-03-09 00:50:18.360425 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-09 00:50:18.360428 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:01.830) 0:01:31.558 ********** 2026-03-09 00:50:18.360432 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:18.360436 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:18.360440 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:18.360443 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:18.360447 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:18.360451 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:18.360454 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:18.360458 | orchestrator | 2026-03-09 
00:50:18.360462 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360465 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:01.098) 0:01:32.657 ********** 2026-03-09 00:50:18.360469 | orchestrator | 2026-03-09 00:50:18.360473 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360477 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.065) 0:01:32.722 ********** 2026-03-09 00:50:18.360480 | orchestrator | 2026-03-09 00:50:18.360484 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360488 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.062) 0:01:32.784 ********** 2026-03-09 00:50:18.360492 | orchestrator | 2026-03-09 00:50:18.360498 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360503 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.202) 0:01:32.987 ********** 2026-03-09 00:50:18.360506 | orchestrator | 2026-03-09 00:50:18.360510 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360514 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.060) 0:01:33.047 ********** 2026-03-09 00:50:18.360517 | orchestrator | 2026-03-09 00:50:18.360521 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360525 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.059) 0:01:33.107 ********** 2026-03-09 00:50:18.360529 | orchestrator | 2026-03-09 00:50:18.360532 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:50:18.360536 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:00.086) 0:01:33.193 ********** 2026-03-09 00:50:18.360540 | orchestrator | 
2026-03-09 00:50:18.360544 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-09 00:50:18.360547 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:00.094) 0:01:33.288 ********** 2026-03-09 00:50:18.360551 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:18.360559 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:18.360562 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:18.360566 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:18.360570 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:18.360574 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:18.360577 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:18.360581 | orchestrator | 2026-03-09 00:50:18.360585 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-09 00:50:18.360589 | orchestrator | Monday 09 March 2026 00:49:30 +0000 (0:00:35.558) 0:02:08.847 ********** 2026-03-09 00:50:18.360593 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:18.360599 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:18.360605 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:18.360611 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:18.360616 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:18.360626 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:18.360631 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:18.360637 | orchestrator | 2026-03-09 00:50:18.360644 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-09 00:50:18.360650 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:32.390) 0:02:41.238 ********** 2026-03-09 00:50:18.360656 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:50:18.360662 | orchestrator | ok: [testbed-manager] 2026-03-09 00:50:18.360669 | orchestrator | ok: [testbed-node-0] 
2026-03-09 00:50:18.360674 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:50:18.360678 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:50:18.360682 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:50:18.360686 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:50:18.360689 | orchestrator | 2026-03-09 00:50:18.360693 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-09 00:50:18.360697 | orchestrator | Monday 09 March 2026 00:50:05 +0000 (0:00:02.132) 0:02:43.371 ********** 2026-03-09 00:50:18.360701 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:18.360704 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:18.360708 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:18.360712 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:18.360715 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:18.360719 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:18.360723 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:18.360727 | orchestrator | 2026-03-09 00:50:18.360731 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:50:18.360735 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:50:18.360739 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:50:18.360743 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:50:18.360747 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:50:18.360751 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:50:18.360754 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-03-09 00:50:18.360758 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:50:18.360762 | orchestrator | 2026-03-09 00:50:18.360766 | orchestrator | 2026-03-09 00:50:18.360769 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:50:18.360777 | orchestrator | Monday 09 March 2026 00:50:15 +0000 (0:00:10.476) 0:02:53.848 ********** 2026-03-09 00:50:18.360781 | orchestrator | =============================================================================== 2026-03-09 00:50:18.360785 | orchestrator | common : Restart fluentd container ------------------------------------- 35.56s 2026-03-09 00:50:18.360789 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.39s 2026-03-09 00:50:18.360792 | orchestrator | common : Restart cron container ---------------------------------------- 10.48s 2026-03-09 00:50:18.360796 | orchestrator | common : Copying over config.json files for services -------------------- 9.60s 2026-03-09 00:50:18.360802 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 9.34s 2026-03-09 00:50:18.360806 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 8.22s 2026-03-09 00:50:18.360809 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.08s 2026-03-09 00:50:18.360813 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 6.43s 2026-03-09 00:50:18.360817 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.93s 2026-03-09 00:50:18.360821 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.43s 2026-03-09 00:50:18.360824 | orchestrator | common : Copying over kolla.target -------------------------------------- 4.23s 2026-03-09 
00:50:18.360828 | orchestrator | service-check-containers : common | Check containers -------------------- 4.15s 2026-03-09 00:50:18.360833 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.36s 2026-03-09 00:50:18.360839 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.23s 2026-03-09 00:50:18.360846 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.16s 2026-03-09 00:50:18.360850 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.71s 2026-03-09 00:50:18.360854 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.64s 2026-03-09 00:50:18.360857 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.29s 2026-03-09 00:50:18.360861 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.13s 2026-03-09 00:50:18.360865 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s 2026-03-09 00:50:18.363718 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task db179a46-7c13-4ccf-bc88-497404a40dbb is in state STARTED 2026-03-09 00:50:18.366527 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:50:18.370611 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:50:18.374139 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:50:18.376197 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task 2e7da1a5-2a08-4ea5-98dd-3594514f7b88 is in state STARTED 2026-03-09 00:50:18.380702 | orchestrator | 2026-03-09 00:50:18 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:50:18.381451 | orchestrator | 2026-03-09 00:50:18 | INFO  | 
Wait 1 second(s) until the next check 2026-03-09 00:50:52.193359 | orchestrator | 2026-03-09 00:50:52 | INFO  | Task 
2e7da1a5-2a08-4ea5-98dd-3594514f7b88 is in state STARTED 2026-03-09 00:50:52.194402 | orchestrator | 2026-03-09 00:50:52 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:50:52.194435 | orchestrator | 2026-03-09 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:50:55.237825 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task db179a46-7c13-4ccf-bc88-497404a40dbb is in state STARTED 2026-03-09 00:50:55.238576 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:50:55.242918 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:50:55.244189 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:50:55.247491 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:50:55.249000 | orchestrator | 2026-03-09 00:50:55.249046 | orchestrator | 2026-03-09 00:50:55.249055 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:50:55.249062 | orchestrator | 2026-03-09 00:50:55.249068 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:50:55.249075 | orchestrator | Monday 09 March 2026 00:50:28 +0000 (0:00:00.753) 0:00:00.753 ********** 2026-03-09 00:50:55.249082 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:50:55.249089 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:50:55.249095 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:50:55.249102 | orchestrator | 2026-03-09 00:50:55.249109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:50:55.249115 | orchestrator | Monday 09 March 2026 00:50:29 +0000 (0:00:00.819) 0:00:01.573 ********** 2026-03-09 00:50:55.249122 | 
orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-09 00:50:55.249152 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-09 00:50:55.249160 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-09 00:50:55.249165 | orchestrator | 2026-03-09 00:50:55.249172 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-09 00:50:55.249179 | orchestrator | 2026-03-09 00:50:55.249186 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-09 00:50:55.249192 | orchestrator | Monday 09 March 2026 00:50:31 +0000 (0:00:01.314) 0:00:02.887 ********** 2026-03-09 00:50:55.249198 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:50:55.249205 | orchestrator | 2026-03-09 00:50:55.249211 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-09 00:50:55.249217 | orchestrator | Monday 09 March 2026 00:50:32 +0000 (0:00:01.230) 0:00:04.117 ********** 2026-03-09 00:50:55.249223 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-09 00:50:55.249231 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-09 00:50:55.249237 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-09 00:50:55.249243 | orchestrator | 2026-03-09 00:50:55.249250 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-09 00:50:55.249256 | orchestrator | Monday 09 March 2026 00:50:33 +0000 (0:00:01.284) 0:00:05.403 ********** 2026-03-09 00:50:55.249263 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-09 00:50:55.249269 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-09 00:50:55.249340 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-09 
00:50:55.249347 | orchestrator | 2026-03-09 00:50:55.249353 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-03-09 00:50:55.249359 | orchestrator | Monday 09 March 2026 00:50:37 +0000 (0:00:03.657) 0:00:09.061 ********** 2026-03-09 00:50:55.249384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:50:55.249394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:50:55.249413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:50:55.249429 | orchestrator | 2026-03-09 00:50:55.249435 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-03-09 00:50:55.249442 | orchestrator | Monday 09 March 2026 00:50:39 +0000 (0:00:01.877) 0:00:10.939 ********** 2026-03-09 00:50:55.249449 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:50:55.249460 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:55.249467 | orchestrator | } 2026-03-09 00:50:55.249474 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:50:55.249480 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:55.249486 | orchestrator | } 2026-03-09 00:50:55.249492 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:50:55.249498 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:50:55.249504 | orchestrator | } 2026-03-09 00:50:55.249510 | orchestrator | 2026-03-09 00:50:55.249516 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:50:55.249523 | orchestrator | Monday 09 March 2026 00:50:40 +0000 (0:00:01.306) 0:00:12.245 ********** 2026-03-09 00:50:55.249530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:50:55.249537 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:55.249548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:50:55.249554 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:55.249561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:50:55.249567 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:55.249574 | orchestrator | 2026-03-09 00:50:55.249580 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-09 00:50:55.249586 | orchestrator | Monday 09 March 2026 00:50:44 +0000 (0:00:04.110) 0:00:16.355 ********** 2026-03-09 00:50:55.249597 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:55.249603 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:55.249609 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:55.249615 | orchestrator | 2026-03-09 00:50:55.249621 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:50:55.249629 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:50:55.249637 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:50:55.249643 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:50:55.249649 | orchestrator | 2026-03-09 00:50:55.249655 | orchestrator | 2026-03-09 00:50:55.249661 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:50:55.249668 | orchestrator | Monday 09 March 2026 00:50:51 +0000 (0:00:07.067) 0:00:23.423 ********** 2026-03-09 00:50:55.249679 | orchestrator | =============================================================================== 2026-03-09 00:50:55.249685 | orchestrator | memcached : Restart memcached container --------------------------------- 
7.07s 2026-03-09 00:50:55.249691 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.11s 2026-03-09 00:50:55.249697 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.66s 2026-03-09 00:50:55.249703 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.88s 2026-03-09 00:50:55.249709 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.31s 2026-03-09 00:50:55.249714 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.31s 2026-03-09 00:50:55.249720 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.29s 2026-03-09 00:50:55.249725 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.23s 2026-03-09 00:50:55.249731 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2026-03-09 00:50:55.249736 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task 2e7da1a5-2a08-4ea5-98dd-3594514f7b88 is in state SUCCESS 2026-03-09 00:50:55.250528 | orchestrator | 2026-03-09 00:50:55 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:50:55.250554 | orchestrator | 2026-03-09 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:07.578097 | orchestrator | 2026-03-09 00:51:07.578188 | orchestrator | 2026-03-09 00:51:07.578204 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:51:07.578215 | orchestrator | 2026-03-09 00:51:07.578226 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:51:07.578237 | orchestrator | Monday 09 March 2026 00:50:28 +0000 (0:00:00.769) 0:00:00.769 ********** 2026-03-09 00:51:07.578248 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:07.578259 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:07.578333 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:07.578345 | orchestrator | 2026-03-09 00:51:07.578356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:51:07.578366 | orchestrator | Monday 09 March 2026 00:50:29 +0000 (0:00:00.873) 0:00:01.643 ********** 2026-03-09 00:51:07.578376 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-09 00:51:07.578387 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-09 00:51:07.578397 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-09 00:51:07.578407 | orchestrator | 2026-03-09 00:51:07.578417 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-09 00:51:07.578427 | orchestrator | 2026-03-09 00:51:07.578437 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-09 00:51:07.578448 | orchestrator | Monday 09 March 2026 00:50:30 +0000 (0:00:01.182) 0:00:02.825 ********** 2026-03-09 00:51:07.578458 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-09 00:51:07.578469 | orchestrator | 2026-03-09 00:51:07.578479 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-09 00:51:07.578489 | orchestrator | Monday 09 March 2026 00:50:31 +0000 (0:00:01.333) 0:00:04.159 ********** 2026-03-09 00:51:07.578503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578642 | orchestrator | 2026-03-09 00:51:07.578654 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-09 00:51:07.578665 | orchestrator | Monday 09 March 2026 00:50:33 +0000 (0:00:02.230) 0:00:06.390 ********** 2026-03-09 00:51:07.578682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578833 | orchestrator | 2026-03-09 00:51:07.578856 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-09 00:51:07.578869 | orchestrator | Monday 09 March 2026 00:50:38 +0000 (0:00:04.684) 0:00:11.074 ********** 2026-03-09 00:51:07.578880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578914 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 
00:51:07.578965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.578975 | orchestrator | 2026-03-09 00:51:07.578985 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-09 00:51:07.578995 | orchestrator | Monday 09 March 2026 00:50:43 +0000 (0:00:04.998) 0:00:16.073 ********** 2026-03-09 00:51:07.579006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.579023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.579033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.579049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.579059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.579076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:51:07.579087 | orchestrator | 2026-03-09 00:51:07.579097 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-09 00:51:07.579113 | orchestrator | Monday 09 March 2026 00:50:47 +0000 (0:00:03.442) 0:00:19.516 ********** 2026-03-09 00:51:07.579139 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:51:07.579160 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:07.579177 | orchestrator | } 2026-03-09 00:51:07.579194 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:51:07.579211 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:07.579241 | orchestrator | } 2026-03-09 00:51:07.579260 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:51:07.579300 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:07.579310 | orchestrator | } 2026-03-09 00:51:07.579320 | orchestrator | 2026-03-09 00:51:07.579330 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:51:07.579340 | 
orchestrator | Monday 09 March 2026 00:50:48 +0000 (0:00:00.970) 0:00:20.486 ********** 2026-03-09 00:51:07.579350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-09 00:51:07.579361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-09 00:51:07.579372 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:07.579388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-09 00:51:07.579399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-09 00:51:07.579409 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:07.579419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-09 00:51:07.579438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-09 00:51:07.579455 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:07.579466 | orchestrator | 2026-03-09 00:51:07.579475 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-09 00:51:07.579485 | orchestrator | Monday 09 March 2026 00:50:50 +0000 (0:00:01.986) 0:00:22.472 ********** 2026-03-09 00:51:07.579495 | orchestrator | 2026-03-09 00:51:07.579505 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-09 00:51:07.579514 | orchestrator | Monday 09 March 2026 00:50:50 +0000 (0:00:00.088) 0:00:22.560 ********** 2026-03-09 00:51:07.579524 | orchestrator | 2026-03-09 00:51:07.579534 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-09 00:51:07.579543 | orchestrator | Monday 09 March 2026 00:50:50 +0000 (0:00:00.165) 0:00:22.726 ********** 2026-03-09 00:51:07.579553 | orchestrator | 2026-03-09 00:51:07.579563 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-09 00:51:07.579572 | orchestrator | Monday 09 March 2026 00:50:50 +0000 (0:00:00.117) 0:00:22.843 ********** 2026-03-09 00:51:07.579582 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:07.579592 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:07.579602 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:07.579611 | orchestrator | 2026-03-09 00:51:07.579621 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-09 00:51:07.579630 | orchestrator | Monday 09 March 2026 00:50:55 +0000 (0:00:04.786) 0:00:27.630 ********** 2026-03-09 00:51:07.579640 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:07.579649 | orchestrator 
| changed: [testbed-node-1] 2026-03-09 00:51:07.579659 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:07.579669 | orchestrator | 2026-03-09 00:51:07.579685 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:51:07.579707 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:51:07.579731 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:51:07.579747 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:51:07.579764 | orchestrator | 2026-03-09 00:51:07.579782 | orchestrator | 2026-03-09 00:51:07.579798 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:51:07.579815 | orchestrator | Monday 09 March 2026 00:51:05 +0000 (0:00:10.766) 0:00:38.396 ********** 2026-03-09 00:51:07.579826 | orchestrator | =============================================================================== 2026-03-09 00:51:07.579835 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.77s 2026-03-09 00:51:07.579845 | orchestrator | redis : Copying over redis config files --------------------------------- 5.00s 2026-03-09 00:51:07.579861 | orchestrator | redis : Restart redis container ----------------------------------------- 4.79s 2026-03-09 00:51:07.579871 | orchestrator | redis : Copying over default config.json files -------------------------- 4.68s 2026-03-09 00:51:07.579881 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.44s 2026-03-09 00:51:07.579891 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.23s 2026-03-09 00:51:07.579900 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.99s 2026-03-09 
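The redis and redis-sentinel container definitions deployed above each carry a healthcheck of the form `healthcheck_listen redis-server 6379`. Conceptually, such a check passes when the service process is accepting TCP connections on the given port. The following is a minimal illustrative sketch of that idea in Python, not Kolla's actual `healthcheck_listen` script (whose internals are not shown in this log):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative stand-in for a listen-style container healthcheck; a real
    check (like Kolla's) may also verify which process owns the socket.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In a container healthcheck context, the exit status of such a probe (0 for listening, non-zero otherwise) is what the `interval`/`retries`/`timeout` settings seen in the definitions above would act on.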
00:51:07.579910 | orchestrator | redis : include_tasks --------------------------------------------------- 1.33s 2026-03-09 00:51:07.579928 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s 2026-03-09 00:51:07.579938 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.97s 2026-03-09 00:51:07.579948 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s 2026-03-09 00:51:07.579958 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s 2026-03-09 00:51:07.579968 | orchestrator | 2026-03-09 00:51:07 | INFO  | Task db179a46-7c13-4ccf-bc88-497404a40dbb is in state SUCCESS 2026-03-09 00:51:07.579978 | orchestrator | 2026-03-09 00:51:07 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:07.583493 | orchestrator | 2026-03-09 00:51:07 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:07.586812 | orchestrator | 2026-03-09 00:51:07 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:07.590857 | orchestrator | 2026-03-09 00:51:07 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:07.591875 | orchestrator | 2026-03-09 00:51:07 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:07.591923 | orchestrator | 2026-03-09 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:10.630362 | orchestrator | 2026-03-09 00:51:10 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:10.631572 | orchestrator | 2026-03-09 00:51:10 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:10.670913 | orchestrator | 2026-03-09 00:51:10 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:10.676026 | orchestrator | 
2026-03-09 00:51:10 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:10.676938 | orchestrator | 2026-03-09 00:51:10 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:10.676998 | orchestrator | 2026-03-09 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:13.788842 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:13.789586 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:13.790563 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:13.791608 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:13.792921 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:13.793054 | orchestrator | 2026-03-09 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:16.846825 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:16.846996 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:16.848096 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:16.848575 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:16.849527 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:16.849565 | orchestrator | 2026-03-09 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:19.885170 | orchestrator | 2026-03-09 00:51:19 | INFO  | 
Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:19.889668 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:19.890928 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:19.893708 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:19.894572 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:19.894614 | orchestrator | 2026-03-09 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:22.941948 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:22.942237 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:22.943384 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:22.944363 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:22.945536 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:22.945581 | orchestrator | 2026-03-09 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:25.988248 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:25.988933 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:25.989883 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:25.990803 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 
661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:25.994729 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:25.994790 | orchestrator | 2026-03-09 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:29.036395 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:29.037507 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:29.038752 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:29.040156 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:29.042220 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:29.042324 | orchestrator | 2026-03-09 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:32.117838 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:32.118399 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:32.119631 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:32.120305 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:32.121557 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:32.122005 | orchestrator | 2026-03-09 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:35.156468 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 
88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:35.160545 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:35.163309 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:35.165160 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:35.172180 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:35.172235 | orchestrator | 2026-03-09 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:38.226368 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:38.226731 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:38.227748 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:38.228587 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:38.229730 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:38.229763 | orchestrator | 2026-03-09 00:51:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:41.272501 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:41.272844 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:41.273844 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:41.276179 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task 
661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:41.277464 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:41.278455 | orchestrator | 2026-03-09 00:51:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:44.343388 | orchestrator | 2026-03-09 00:51:44 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:44.344177 | orchestrator | 2026-03-09 00:51:44 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:44.345515 | orchestrator | 2026-03-09 00:51:44 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:44.346562 | orchestrator | 2026-03-09 00:51:44 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:44.348147 | orchestrator | 2026-03-09 00:51:44 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:44.348233 | orchestrator | 2026-03-09 00:51:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:47.401564 | orchestrator | 2026-03-09 00:51:47 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:47.402749 | orchestrator | 2026-03-09 00:51:47 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:47.403632 | orchestrator | 2026-03-09 00:51:47 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:47.404627 | orchestrator | 2026-03-09 00:51:47 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:47.405784 | orchestrator | 2026-03-09 00:51:47 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state STARTED 2026-03-09 00:51:47.405840 | orchestrator | 2026-03-09 00:51:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:50.457169 | orchestrator | 2026-03-09 00:51:50.457426 | orchestrator | 2026-03-09 
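The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above show the orchestrator polling a set of task IDs until they leave the STARTED state. A hedged sketch of that polling pattern, assuming a `get_state(task_id)` callable (the actual OSISM client code is not shown in this log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=60):
    """Poll task states until none remain STARTED, or give up.

    Mirrors the log's wait loop: check every task each round, log-style,
    then sleep `interval` seconds before the next check.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        for tid in list(pending):
            if get_state(tid) != "STARTED":
                pending.discard(tid)  # task reached SUCCESS/FAILURE etc.
        if not pending:
            return True
        time.sleep(interval)
    return False  # timed out with tasks still running
```

The real client presumably distinguishes terminal states (SUCCESS vs. FAILURE); this sketch only captures the wait-until-not-STARTED loop visible in the output.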
00:51:50.457458 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:51:50.457485 | orchestrator | 2026-03-09 00:51:50.457507 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:51:50.457527 | orchestrator | Monday 09 March 2026 00:50:27 +0000 (0:00:00.398) 0:00:00.398 ********** 2026-03-09 00:51:50.457548 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:50.457571 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:50.457590 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:50.457610 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:51:50.457632 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:51:50.457653 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:51:50.457673 | orchestrator | 2026-03-09 00:51:50.457692 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:51:50.457711 | orchestrator | Monday 09 March 2026 00:50:28 +0000 (0:00:01.449) 0:00:01.847 ********** 2026-03-09 00:51:50.457725 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:51:50.457739 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:51:50.457752 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:51:50.457765 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:51:50.457777 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:51:50.457795 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:51:50.457814 | orchestrator | 2026-03-09 00:51:50.457834 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-09 00:51:50.457854 | 
orchestrator | 2026-03-09 00:51:50.457870 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-09 00:51:50.457884 | orchestrator | Monday 09 March 2026 00:50:29 +0000 (0:00:01.021) 0:00:02.869 ********** 2026-03-09 00:51:50.457897 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:51:50.457917 | orchestrator | 2026-03-09 00:51:50.457936 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 00:51:50.457977 | orchestrator | Monday 09 March 2026 00:50:31 +0000 (0:00:02.419) 0:00:05.289 ********** 2026-03-09 00:51:50.457997 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-09 00:51:50.458015 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-09 00:51:50.458095 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-09 00:51:50.458108 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-09 00:51:50.458124 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-09 00:51:50.458143 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-09 00:51:50.458163 | orchestrator | 2026-03-09 00:51:50.458182 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 00:51:50.458201 | orchestrator | Monday 09 March 2026 00:50:34 +0000 (0:00:02.750) 0:00:08.040 ********** 2026-03-09 00:51:50.458213 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-09 00:51:50.458225 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-09 00:51:50.458370 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-09 00:51:50.458413 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-09 00:51:50.458425 | orchestrator | changed: 
[testbed-node-4] => (item=openvswitch) 2026-03-09 00:51:50.458436 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-09 00:51:50.458447 | orchestrator | 2026-03-09 00:51:50.458458 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 00:51:50.458468 | orchestrator | Monday 09 March 2026 00:50:37 +0000 (0:00:02.753) 0:00:10.794 ********** 2026-03-09 00:51:50.458479 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-09 00:51:50.458490 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:50.458502 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-09 00:51:50.458512 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:50.458523 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-09 00:51:50.458533 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:50.458544 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-09 00:51:50.458555 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:50.458565 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-09 00:51:50.458576 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:50.458586 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-09 00:51:50.458597 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:50.458607 | orchestrator | 2026-03-09 00:51:50.458618 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-09 00:51:50.458629 | orchestrator | Monday 09 March 2026 00:50:41 +0000 (0:00:03.821) 0:00:14.615 ********** 2026-03-09 00:51:50.458640 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:50.458651 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:50.458662 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:50.458672 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
00:51:50.458683 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:50.458694 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:50.458704 | orchestrator | 2026-03-09 00:51:50.458715 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-09 00:51:50.458726 | orchestrator | Monday 09 March 2026 00:50:42 +0000 (0:00:01.601) 0:00:16.217 ********** 2026-03-09 00:51:50.458762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458813 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.458946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459092 | orchestrator | 2026-03-09 00:51:50.459103 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-09 00:51:50.459115 | orchestrator | Monday 09 March 2026 00:50:46 +0000 (0:00:03.914) 0:00:20.132 ********** 2026-03-09 00:51:50.459126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459351 | orchestrator | 2026-03-09 00:51:50.459362 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-09 00:51:50.459373 | orchestrator | Monday 09 March 2026 00:50:52 +0000 (0:00:05.704) 0:00:25.836 ********** 2026-03-09 00:51:50.459384 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:50.459402 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:50.459413 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:50.459424 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:50.459435 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:50.459446 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:50.459457 | orchestrator | 2026-03-09 00:51:50.459475 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-03-09 00:51:50.459493 | orchestrator | Monday 09 March 2026 00:50:53 +0000 (0:00:01.317) 0:00:27.154 ********** 2026-03-09 00:51:50.459519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459624 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:51:50.459708 | orchestrator | 2026-03-09 00:51:50.459719 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-03-09 00:51:50.459731 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:03.112) 0:00:30.267 ********** 2026-03-09 00:51:50.459742 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:51:50.459753 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:50.459764 | orchestrator | } 2026-03-09 00:51:50.459775 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:51:50.459785 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:50.459796 | orchestrator | } 2026-03-09 00:51:50.459807 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:51:50.459818 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:50.459834 | orchestrator | } 2026-03-09 00:51:50.459845 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 00:51:50.459856 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:50.459867 | orchestrator | } 2026-03-09 00:51:50.459878 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 00:51:50.459889 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:50.459899 | orchestrator | } 2026-03-09 00:51:50.459910 | orchestrator | changed: [testbed-node-5] => { 2026-03-09 00:51:50.459921 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:51:50.459932 | orchestrator | } 2026-03-09 00:51:50.459943 | orchestrator | 2026-03-09 00:51:50.459956 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:51:50.459976 | 
orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:02.394) 0:00:32.661 ********** 2026-03-09 00:51:50.459995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-09 00:51:50.460014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-09 00:51:50.460053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-09 00:51:50.460074 | orchestrator | 2026-03-09 00:51:50 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:50.460091 | orchestrator | 2026-03-09 00:51:50 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:50.460110 | orchestrator | 2026-03-09 00:51:50 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:50.460128 | orchestrator | 2026-03-09 00:51:50 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:50.460147 | orchestrator | 2026-03-09 00:51:50 | INFO  | Task 2ccac33e-508e-4477-be93-78e8669c3f31 is in state SUCCESS 2026-03-09 00:51:50.460166 | orchestrator | 2026-03-09 00:51:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:50.460187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-09 00:51:50.460207 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:50.460225 | orchestrator | skipping: [testbed-node-1] 
2026-03-09 00:51:50.460314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-09 00:51:50.460339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-09 00:51:50.460372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-09 00:51:50.460391 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:50.460414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-09 00:51:50.460426 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:50.460437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-09 00:51:50.460454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-09 00:51:50.460466 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:50.460477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-09 00:51:50.460489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-09 00:51:50.460512 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:50.460523 | orchestrator | 2026-03-09 00:51:50.460534 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:51:50.460545 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:02.619) 0:00:35.281 ********** 2026-03-09 00:51:50.460556 | orchestrator | 2026-03-09 00:51:50.460566 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:51:50.460577 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:00.526) 0:00:35.807 ********** 2026-03-09 00:51:50.460588 | orchestrator | 2026-03-09 00:51:50.460599 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:51:50.460610 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:00.240) 0:00:36.047 ********** 2026-03-09 00:51:50.460621 | orchestrator | 2026-03-09 00:51:50.460632 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:51:50.460642 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:00.177) 0:00:36.225 ********** 2026-03-09 00:51:50.460653 | orchestrator | 2026-03-09 00:51:50.460664 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:51:50.460683 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.345) 0:00:36.571 ********** 2026-03-09 00:51:50.460694 | orchestrator | 2026-03-09 00:51:50.460705 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:51:50.460716 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.144) 0:00:36.716 ********** 2026-03-09 00:51:50.460726 | orchestrator | 2026-03-09 00:51:50.460743 | 
orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-09 00:51:50.460761 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.165) 0:00:36.882 ********** 2026-03-09 00:51:50.460780 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:50.460798 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:51:50.460817 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:50.460829 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:50.460840 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:50.460850 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:51:50.460861 | orchestrator | 2026-03-09 00:51:50.460872 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-09 00:51:50.460883 | orchestrator | Monday 09 March 2026 00:51:15 +0000 (0:00:11.885) 0:00:48.767 ********** 2026-03-09 00:51:50.460894 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:50.460905 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:50.460916 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:50.460926 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:51:50.460942 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:51:50.460959 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:51:50.460976 | orchestrator | 2026-03-09 00:51:50.460994 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-09 00:51:50.461012 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:02.327) 0:00:51.095 ********** 2026-03-09 00:51:50.461031 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:50.461048 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:50.461066 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:50.461077 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:51:50.461088 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:50.461099 | orchestrator | changed: 
[testbed-node-5] 2026-03-09 00:51:50.461109 | orchestrator | 2026-03-09 00:51:50.461122 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-09 00:51:50.461154 | orchestrator | Monday 09 March 2026 00:51:27 +0000 (0:00:09.607) 0:01:00.703 ********** 2026-03-09 00:51:50.461172 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-09 00:51:50.461201 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-09 00:51:50.461220 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-09 00:51:50.461266 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-09 00:51:50.461281 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-09 00:51:50.461296 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-09 00:51:50.461311 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-09 00:51:50.461322 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-09 00:51:50.461333 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-09 00:51:50.461344 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-09 00:51:50.461355 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-09 00:51:50.461366 | 
orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-09 00:51:50.461377 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:51:50.461387 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:51:50.461398 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:51:50.461409 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:51:50.461419 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:51:50.461430 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:51:50.461441 | orchestrator | 2026-03-09 00:51:50.461452 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-09 00:51:50.461463 | orchestrator | Monday 09 March 2026 00:51:34 +0000 (0:00:07.007) 0:01:07.710 ********** 2026-03-09 00:51:50.461473 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-09 00:51:50.461484 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:50.461495 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-09 00:51:50.461506 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:50.461517 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-09 00:51:50.461527 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:50.461548 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-09 00:51:50.461559 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-09 
00:51:50.461570 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-09 00:51:50.461581 | orchestrator | 2026-03-09 00:51:50.461591 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-09 00:51:50.461602 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:02.396) 0:01:10.107 ********** 2026-03-09 00:51:50.461613 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:51:50.461632 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:50.461644 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:51:50.461654 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:50.461665 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:51:50.461676 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:50.461687 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:51:50.461698 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:51:50.461708 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:51:50.461719 | orchestrator | 2026-03-09 00:51:50.461730 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-09 00:51:50.461741 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:04.310) 0:01:14.417 ********** 2026-03-09 00:51:50.461751 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:50.461762 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:50.461773 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:50.461784 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:51:50.461794 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:51:50.461805 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:50.461816 | orchestrator | 2026-03-09 00:51:50.461826 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-09 00:51:50.461839 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:51:50.461850 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:51:50.461867 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:51:50.461878 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:51:50.461889 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:51:50.461900 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 00:51:50.461910 | orchestrator | 2026-03-09 00:51:50.461921 | orchestrator | 2026-03-09 00:51:50.461932 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:51:50.461949 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:08.143) 0:01:22.561 ********** 2026-03-09 00:51:50.461967 | orchestrator | =============================================================================== 2026-03-09 00:51:50.461987 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.75s 2026-03-09 00:51:50.462004 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.89s 2026-03-09 00:51:50.462062 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.01s 2026-03-09 00:51:50.462074 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.70s 2026-03-09 00:51:50.462085 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.31s 2026-03-09 00:51:50.462095 | orchestrator | 
openvswitch : Ensuring config directories exist ------------------------- 3.91s 2026-03-09 00:51:50.462106 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.82s 2026-03-09 00:51:50.462117 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.11s 2026-03-09 00:51:50.462127 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.75s 2026-03-09 00:51:50.462138 | orchestrator | module-load : Load modules ---------------------------------------------- 2.75s 2026-03-09 00:51:50.462157 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.62s 2026-03-09 00:51:50.462167 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.42s 2026-03-09 00:51:50.462178 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.40s 2026-03-09 00:51:50.462189 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.39s 2026-03-09 00:51:50.462199 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.33s 2026-03-09 00:51:50.462213 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.60s 2026-03-09 00:51:50.462255 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.60s 2026-03-09 00:51:50.462276 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.45s 2026-03-09 00:51:50.462296 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.32s 2026-03-09 00:51:50.462326 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2026-03-09 00:51:53.498491 | orchestrator | 2026-03-09 00:51:53 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:53.500621 | 
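The openvswitch play above repeatedly echoes healthcheck dicts of the form `{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', ...], 'timeout': '30'}`. As a rough orientation for what such a dict corresponds to at the container runtime level, here is a minimal sketch that renders one of these dicts as Docker CLI health flags. The helper name and the flag mapping are illustrative assumptions, not Kolla's actual implementation:

```python
# Sketch: map a Kolla-style healthcheck dict (as echoed in the task items
# above) onto docker run health flags. Assumption: the numeric fields are
# seconds, and a 'CMD-SHELL' test runs via the shell.
def healthcheck_flags(hc):
    """Render a healthcheck dict as a list of docker run arguments."""
    cmd = hc["test"]
    if cmd and cmd[0] == "CMD-SHELL":
        test = " ".join(cmd[1:])  # shell form: join the remaining words
    else:
        test = " ".join(cmd)
    return [
        f"--health-cmd={test}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example: the openvswitch_vswitchd healthcheck from the log items above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "ovs-appctl version"], "timeout": "30"}
flags = healthcheck_flags(hc)
```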
orchestrator | 2026-03-09 00:51:53 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:53.501491 | orchestrator | 2026-03-09 00:51:53 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:53.502135 | orchestrator | 2026-03-09 00:51:53 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:53.508047 | orchestrator | 2026-03-09 00:51:53 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:51:53.508148 | orchestrator | 2026-03-09 00:51:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:56.592534 | orchestrator | 2026-03-09 00:51:56 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:56.594533 | orchestrator | 2026-03-09 00:51:56 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:56.596733 | orchestrator | 2026-03-09 00:51:56 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:56.596785 | orchestrator | 2026-03-09 00:51:56 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:56.600070 | orchestrator | 2026-03-09 00:51:56 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:51:56.600127 | orchestrator | 2026-03-09 00:51:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:59.634586 | orchestrator | 2026-03-09 00:51:59 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:51:59.635672 | orchestrator | 2026-03-09 00:51:59 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:51:59.636741 | orchestrator | 2026-03-09 00:51:59 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:51:59.640500 | orchestrator | 2026-03-09 00:51:59 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:51:59.642998 | 
orchestrator | 2026-03-09 00:51:59 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:51:59.643042 | orchestrator | 2026-03-09 00:51:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:02.687510 | orchestrator | 2026-03-09 00:52:02 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:02.688110 | orchestrator | 2026-03-09 00:52:02 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:02.688811 | orchestrator | 2026-03-09 00:52:02 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:52:02.691888 | orchestrator | 2026-03-09 00:52:02 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:02.694079 | orchestrator | 2026-03-09 00:52:02 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:02.695594 | orchestrator | 2026-03-09 00:52:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:05.735013 | orchestrator | 2026-03-09 00:52:05 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:05.735592 | orchestrator | 2026-03-09 00:52:05 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:05.736607 | orchestrator | 2026-03-09 00:52:05 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:52:05.737622 | orchestrator | 2026-03-09 00:52:05 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:05.738491 | orchestrator | 2026-03-09 00:52:05 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:05.738813 | orchestrator | 2026-03-09 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:08.781795 | orchestrator | 2026-03-09 00:52:08 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:08.781879 | orchestrator | 2026-03-09 
00:52:08 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:08.783035 | orchestrator | 2026-03-09 00:52:08 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:52:08.784403 | orchestrator | 2026-03-09 00:52:08 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:08.787859 | orchestrator | 2026-03-09 00:52:08 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:08.787931 | orchestrator | 2026-03-09 00:52:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:11.895091 | orchestrator | 2026-03-09 00:52:11 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:11.895394 | orchestrator | 2026-03-09 00:52:11 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:11.898368 | orchestrator | 2026-03-09 00:52:11 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:52:11.898790 | orchestrator | 2026-03-09 00:52:11 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:11.899442 | orchestrator | 2026-03-09 00:52:11 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:11.899472 | orchestrator | 2026-03-09 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:14.941989 | orchestrator | 2026-03-09 00:52:14 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:14.942532 | orchestrator | 2026-03-09 00:52:14 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:14.943345 | orchestrator | 2026-03-09 00:52:14 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED 2026-03-09 00:52:14.944834 | orchestrator | 2026-03-09 00:52:14 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:14.945532 | orchestrator | 2026-03-09 
00:52:14 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED
2026-03-09 00:52:14.945570 | orchestrator | 2026-03-09 00:52:14 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:18.286871 | orchestrator | 2026-03-09 00:52:18 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:52:18.287440 | orchestrator | 2026-03-09 00:52:18 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED
2026-03-09 00:52:18.288381 | orchestrator | 2026-03-09 00:52:18 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:52:18.289416 | orchestrator | 2026-03-09 00:52:18 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED
2026-03-09 00:52:18.290293 | orchestrator | 2026-03-09 00:52:18 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED
2026-03-09 00:52:18.290342 | orchestrator | 2026-03-09 00:52:18 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:21.461787 | orchestrator | 2026-03-09 00:52:21 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:52:21.513462 | orchestrator | 2026-03-09 00:52:21 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED
2026-03-09 00:52:21.513978 | orchestrator | 2026-03-09 00:52:21 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:52:21.514941 | orchestrator | 2026-03-09 00:52:21 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED
2026-03-09 00:52:21.515624 | orchestrator | 2026-03-09 00:52:21 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED
2026-03-09 00:52:21.515655 | orchestrator | 2026-03-09 00:52:21 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:24.576814 | orchestrator | 2026-03-09 00:52:24 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:52:24.577166 | orchestrator | 2026-03-09 00:52:24 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED
2026-03-09 00:52:24.577864 | orchestrator | 2026-03-09 00:52:24 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state STARTED
2026-03-09 00:52:24.580146 | orchestrator | 2026-03-09 00:52:24 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED
2026-03-09 00:52:24.580608 | orchestrator | 2026-03-09 00:52:24 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED
2026-03-09 00:52:24.580657 | orchestrator | 2026-03-09 00:52:24 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:27.616262 | orchestrator | 2026-03-09 00:52:27 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:52:27.617604 | orchestrator | 2026-03-09 00:52:27 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED
2026-03-09 00:52:27.619242 | orchestrator | 2026-03-09 00:52:27 | INFO  | Task 6683c958-0677-440f-b04e-7362f59e07b6 is in state SUCCESS
2026-03-09 00:52:27.622553 | orchestrator |
2026-03-09 00:52:27.622624 | orchestrator |
2026-03-09 00:52:27.622644 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-09 00:52:27.622661 | orchestrator |
2026-03-09 00:52:27.622678 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-09 00:52:27.622694 | orchestrator | Monday 09 March 2026 00:47:22 +0000 (0:00:00.265) 0:00:00.265 **********
2026-03-09 00:52:27.622711 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:52:27.622727 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:52:27.622744 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:52:27.622761 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.622778 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.622795 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.622811 | orchestrator |
2026-03-09 00:52:27.622829 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-09 00:52:27.622846 | orchestrator | Monday 09 March 2026 00:47:23 +0000 (0:00:00.621) 0:00:00.886 **********
2026-03-09 00:52:27.622863 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.622880 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.622923 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.622940 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.622958 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.622976 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.622994 | orchestrator |
2026-03-09 00:52:27.623010 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-09 00:52:27.623027 | orchestrator | Monday 09 March 2026 00:47:24 +0000 (0:00:00.587) 0:00:01.473 **********
2026-03-09 00:52:27.623046 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.623065 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.623080 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.623096 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.623113 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.623129 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.623145 | orchestrator |
2026-03-09 00:52:27.623160 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-09 00:52:27.623178 | orchestrator | Monday 09 March 2026 00:47:24 +0000 (0:00:00.746) 0:00:02.220 **********
2026-03-09 00:52:27.623224 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:52:27.623242 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:52:27.623258 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:52:27.623273 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.623287 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.623303 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.623319 | orchestrator |
2026-03-09 00:52:27.623335 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-09 00:52:27.623351 | orchestrator | Monday 09 March 2026 00:47:27 +0000 (0:00:02.990) 0:00:05.211 **********
2026-03-09 00:52:27.623366 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:52:27.623382 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:52:27.623397 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:52:27.623414 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.623429 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.623448 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.623467 | orchestrator |
2026-03-09 00:52:27.623485 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-09 00:52:27.623503 | orchestrator | Monday 09 March 2026 00:47:28 +0000 (0:00:01.135) 0:00:06.347 **********
2026-03-09 00:52:27.623520 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:52:27.623537 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:52:27.623553 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.623569 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.623585 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.623601 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:52:27.623617 | orchestrator |
2026-03-09 00:52:27.623635 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-09 00:52:27.623650 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:01.832) 0:00:08.179 **********
2026-03-09 00:52:27.623667 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.623684 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.623700 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.623717 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.623734 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.623752 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.623769 | orchestrator |
2026-03-09 00:52:27.623787 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-09 00:52:27.623804 | orchestrator | Monday 09 March 2026 00:47:32 +0000 (0:00:01.742) 0:00:09.922 **********
2026-03-09 00:52:27.623822 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.623839 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.623852 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.624594 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.624646 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.624682 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.624698 | orchestrator |
2026-03-09 00:52:27.624714 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-09 00:52:27.624756 | orchestrator | Monday 09 March 2026 00:47:34 +0000 (0:00:01.647) 0:00:11.569 **********
2026-03-09 00:52:27.624783 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:52:27.624796 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:52:27.624807 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.624821 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:52:27.624834 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:52:27.624847 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.624861 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:52:27.624873 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:52:27.624960 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.624979 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:52:27.625013 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:52:27.625026 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.625040 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:52:27.625053 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:52:27.625066 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.625078 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:52:27.625091 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:52:27.625103 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.625115 | orchestrator |
2026-03-09 00:52:27.625128 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-09 00:52:27.625141 | orchestrator | Monday 09 March 2026 00:47:35 +0000 (0:00:01.131) 0:00:12.700 **********
2026-03-09 00:52:27.625154 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.625169 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.625182 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.625245 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.625261 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.625275 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.625289 | orchestrator |
2026-03-09 00:52:27.625303 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-09 00:52:27.625317 | orchestrator | Monday 09 March 2026 00:47:36 +0000 (0:00:01.667) 0:00:14.367 **********
2026-03-09 00:52:27.625331 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:52:27.625344 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:52:27.625357 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:52:27.625369 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.625382 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.625395 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.625409 | orchestrator |
2026-03-09 00:52:27.625422 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-09 00:52:27.625435 | orchestrator | Monday 09 March 2026 00:47:38 +0000 (0:00:01.242) 0:00:15.609 **********
2026-03-09 00:52:27.625448 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:52:27.625460 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.625472 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:52:27.625486 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.625501 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:52:27.625514 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.625527 | orchestrator |
2026-03-09 00:52:27.625541 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-09 00:52:27.625571 | orchestrator | Monday 09 March 2026 00:47:45 +0000 (0:00:07.250) 0:00:22.860 **********
2026-03-09 00:52:27.625585 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.625598 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.625611 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.625625 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.625639 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.625653 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.625667 | orchestrator |
2026-03-09 00:52:27.625680 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-09 00:52:27.625693 | orchestrator | Monday 09 March 2026 00:47:48 +0000 (0:00:02.723) 0:00:25.583 **********
2026-03-09 00:52:27.625705 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.625718 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.625732 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.625745 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.625757 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.625769 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.625782 | orchestrator |
2026-03-09 00:52:27.625795 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-09 00:52:27.625809 | orchestrator | Monday 09 March 2026 00:47:52 +0000 (0:00:04.297) 0:00:29.881 **********
2026-03-09 00:52:27.625822 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.625835 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.625847 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.625861 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.625874 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.625887 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.625900 | orchestrator |
2026-03-09 00:52:27.625913 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-09 00:52:27.625927 | orchestrator | Monday 09 March 2026 00:47:53 +0000 (0:00:01.486) 0:00:31.368 **********
2026-03-09 00:52:27.625941 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-09 00:52:27.625954 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-09 00:52:27.625967 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.625978 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-09 00:52:27.625992 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-09 00:52:27.626057 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.626076 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-09 00:52:27.626089 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-09 00:52:27.626103 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.626117 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-09 00:52:27.626130 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-09 00:52:27.626144 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.626159 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-09 00:52:27.626173 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-09 00:52:27.626187 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.626227 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-09 00:52:27.626240 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-09 00:52:27.626248 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.626256 | orchestrator |
2026-03-09 00:52:27.626264 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-09 00:52:27.626289 | orchestrator | Monday 09 March 2026 00:47:55 +0000 (0:00:01.888) 0:00:33.256 **********
2026-03-09 00:52:27.626298 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.626306 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.626313 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.626332 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.626340 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.626348 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.626356 | orchestrator |
2026-03-09 00:52:27.626364 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-09 00:52:27.626373 | orchestrator | Monday 09 March 2026 00:47:57 +0000 (0:00:01.391) 0:00:34.648 **********
2026-03-09 00:52:27.626381 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:52:27.626388 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:52:27.626396 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:52:27.626407 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.626420 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.626433 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.626444 | orchestrator |
2026-03-09 00:52:27.626454 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-09 00:52:27.626464 | orchestrator |
2026-03-09 00:52:27.626475 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-09 00:52:27.626486 | orchestrator | Monday 09 March 2026 00:47:59 +0000 (0:00:02.225) 0:00:36.873 **********
2026-03-09 00:52:27.626496 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.626507 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.626518 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.626528 | orchestrator |
2026-03-09 00:52:27.626540 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-09 00:52:27.626551 | orchestrator | Monday 09 March 2026 00:48:01 +0000 (0:00:02.149) 0:00:39.023 **********
2026-03-09 00:52:27.626561 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.626572 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.626582 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.626593 | orchestrator |
2026-03-09 00:52:27.626604 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-09 00:52:27.626615 | orchestrator | Monday 09 March 2026 00:48:04 +0000 (0:00:02.891) 0:00:41.914 **********
2026-03-09 00:52:27.626625 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.626636 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.626645 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.626656 | orchestrator |
2026-03-09 00:52:27.626667 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-09 00:52:27.626677 | orchestrator | Monday 09 March 2026 00:48:06 +0000 (0:00:01.524) 0:00:43.439 **********
2026-03-09 00:52:27.626687 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.626698 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.626708 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.626718 | orchestrator |
2026-03-09 00:52:27.626730 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-09 00:52:27.626742 | orchestrator | Monday 09 March 2026 00:48:07 +0000 (0:00:01.524) 0:00:44.963 **********
2026-03-09 00:52:27.626752 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.626762 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.626773 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.626784 | orchestrator |
2026-03-09 00:52:27.626795 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-09 00:52:27.626807 | orchestrator | Monday 09 March 2026 00:48:08 +0000 (0:00:01.126) 0:00:46.090 **********
2026-03-09 00:52:27.626818 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.626829 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.626837 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.626843 | orchestrator |
2026-03-09 00:52:27.626850 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-09 00:52:27.626857 | orchestrator | Monday 09 March 2026 00:48:11 +0000 (0:00:02.442) 0:00:48.533 **********
2026-03-09 00:52:27.626864 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.626870 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.626877 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.626893 | orchestrator |
2026-03-09 00:52:27.626904 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-09 00:52:27.626915 | orchestrator | Monday 09 March 2026 00:48:13 +0000 (0:00:02.699) 0:00:51.232 **********
2026-03-09 00:52:27.626925 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:52:27.626936 | orchestrator |
2026-03-09 00:52:27.626947 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-09 00:52:27.626959 | orchestrator | Monday 09 March 2026 00:48:15 +0000 (0:00:01.842) 0:00:53.075 **********
2026-03-09 00:52:27.626970 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.626981 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.626992 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627000 | orchestrator |
2026-03-09 00:52:27.627007 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-09 00:52:27.627020 | orchestrator | Monday 09 March 2026 00:48:18 +0000 (0:00:03.254) 0:00:56.330 **********
2026-03-09 00:52:27.627027 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.627034 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.627041 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627047 | orchestrator |
2026-03-09 00:52:27.627054 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-09 00:52:27.627060 | orchestrator | Monday 09 March 2026 00:48:19 +0000 (0:00:00.881) 0:00:57.211 **********
2026-03-09 00:52:27.627068 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627079 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.627090 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.627101 | orchestrator |
2026-03-09 00:52:27.627111 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-09 00:52:27.627122 | orchestrator | Monday 09 March 2026 00:48:21 +0000 (0:00:01.662) 0:00:58.874 **********
2026-03-09 00:52:27.627134 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.627144 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.627156 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627167 | orchestrator |
2026-03-09 00:52:27.627178 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-09 00:52:27.627228 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:02.439) 0:01:01.313 **********
2026-03-09 00:52:27.627242 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.627251 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.627258 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.627264 | orchestrator |
2026-03-09 00:52:27.627271 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-09 00:52:27.627278 | orchestrator | Monday 09 March 2026 00:48:25 +0000 (0:00:01.580) 0:01:02.893 **********
2026-03-09 00:52:27.627284 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.627291 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.627298 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.627304 | orchestrator |
2026-03-09 00:52:27.627311 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-09 00:52:27.627318 | orchestrator | Monday 09 March 2026 00:48:26 +0000 (0:00:00.681) 0:01:03.575 **********
2026-03-09 00:52:27.627324 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627331 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.627338 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.627344 | orchestrator |
2026-03-09 00:52:27.627351 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-09 00:52:27.627358 | orchestrator | Monday 09 March 2026 00:48:28 +0000 (0:00:02.125) 0:01:05.700 **********
2026-03-09 00:52:27.627364 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627371 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627378 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627384 | orchestrator |
2026-03-09 00:52:27.627391 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-09 00:52:27.627409 | orchestrator | Monday 09 March 2026 00:48:31 +0000 (0:00:02.785) 0:01:08.485 **********
2026-03-09 00:52:27.627416 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627422 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627429 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627435 | orchestrator |
2026-03-09 00:52:27.627442 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-09 00:52:27.627449 | orchestrator | Monday 09 March 2026 00:48:31 +0000 (0:00:00.680) 0:01:09.166 **********
2026-03-09 00:52:27.627456 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:52:27.627464 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:52:27.627471 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:52:27.627478 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:52:27.627484 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:52:27.627491 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:52:27.627498 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:52:27.627505 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:52:27.627512 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:52:27.627518 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:52:27.627525 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:52:27.627532 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
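The `FAILED - RETRYING` burst above is the verify task polling until every control-plane node has registered with the cluster and reports `Ready`. A minimal sketch of that kind of check is below; it runs the filter against sample `kubectl get nodes` output, since the real command needs a live cluster. The sample statuses and the version string are illustrative assumptions, not taken from this job:

```shell
# Sketch: count nodes whose STATUS column ($2) is exactly "Ready".
# Sample data stands in for `k3s kubectl get nodes --no-headers`;
# statuses/versions here are hypothetical.
sample='testbed-node-0   Ready      control-plane,master   1m   v1.31.4+k3s1
testbed-node-1   Ready      control-plane,master   1m   v1.31.4+k3s1
testbed-node-2   NotReady   control-plane,master   1m   v1.31.4+k3s1'

# "NotReady" fails the exact string comparison, so only 2 of 3 count;
# a retry loop like the task above would poll again in that situation.
ready=$(printf '%s\n' "$sample" | awk '$2 == "Ready"' | wc -l | tr -d ' ')
echo "ready=$ready"
```

On a live node the same filter would be fed from `k3s kubectl get nodes --no-headers`, retried (here up to 20 times) until the Ready count matches the number of expected servers.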
2026-03-09 00:52:27.627538 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627545 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627552 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627559 | orchestrator |
2026-03-09 00:52:27.627570 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-09 00:52:27.627577 | orchestrator | Monday 09 March 2026 00:49:15 +0000 (0:00:43.329) 0:01:52.495 **********
2026-03-09 00:52:27.627584 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.627594 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.627605 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.627615 | orchestrator |
2026-03-09 00:52:27.627625 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-09 00:52:27.627637 | orchestrator | Monday 09 March 2026 00:49:15 +0000 (0:00:00.490) 0:01:52.986 **********
2026-03-09 00:52:27.627648 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627659 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.627669 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.627679 | orchestrator |
2026-03-09 00:52:27.627685 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-09 00:52:27.627693 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:01.259) 0:01:54.245 **********
2026-03-09 00:52:27.627699 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627706 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.627719 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.627725 | orchestrator |
2026-03-09 00:52:27.627738 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-09 00:52:27.627745 | orchestrator | Monday 09 March 2026 00:49:18 +0000 (0:00:01.729) 0:01:55.975 **********
2026-03-09 00:52:27.627752 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.627758 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.627765 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627772 | orchestrator |
2026-03-09 00:52:27.627778 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-09 00:52:27.627785 | orchestrator | Monday 09 March 2026 00:49:44 +0000 (0:00:26.228) 0:02:22.203 **********
2026-03-09 00:52:27.627792 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627798 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627805 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627811 | orchestrator |
2026-03-09 00:52:27.627818 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-09 00:52:27.627825 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:00.983) 0:02:23.188 **********
2026-03-09 00:52:27.627831 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627838 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627844 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627851 | orchestrator |
2026-03-09 00:52:27.627857 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-09 00:52:27.627864 | orchestrator | Monday 09 March 2026 00:49:46 +0000 (0:00:00.743) 0:02:23.931 **********
2026-03-09 00:52:27.627871 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.627877 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.627884 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.627890 | orchestrator |
2026-03-09 00:52:27.627901 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-09 00:52:27.627913 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:00.705) 0:02:24.637 **********
2026-03-09 00:52:27.627924 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627935 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627947 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.627958 | orchestrator |
2026-03-09 00:52:27.627970 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-09 00:52:27.627978 | orchestrator | Monday 09 March 2026 00:49:48 +0000 (0:00:00.987) 0:02:25.625 **********
2026-03-09 00:52:27.627985 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.627991 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.627998 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.628004 | orchestrator |
2026-03-09 00:52:27.628011 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-09 00:52:27.628018 | orchestrator | Monday 09 March 2026 00:49:48 +0000 (0:00:00.342) 0:02:25.968 **********
2026-03-09 00:52:27.628025 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.628031 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.628038 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.628045 | orchestrator |
2026-03-09 00:52:27.628052 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-09 00:52:27.628058 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:00.698) 0:02:26.666 **********
2026-03-09 00:52:27.628065 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.628072 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.628078 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.628085 | orchestrator |
2026-03-09 00:52:27.628092 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-09 00:52:27.628098 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:00.682) 0:02:27.349 **********
2026-03-09 00:52:27.628105 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.628112 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.628118 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.628125 | orchestrator |
2026-03-09 00:52:27.628138 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-09 00:52:27.628145 | orchestrator | Monday 09 March 2026 00:49:51 +0000 (0:00:01.197) 0:02:28.546 **********
2026-03-09 00:52:27.628151 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:27.628158 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:27.628165 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:27.628172 | orchestrator |
2026-03-09 00:52:27.628178 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-09 00:52:27.628185 | orchestrator | Monday 09 March 2026 00:49:52 +0000 (0:00:00.943) 0:02:29.490 **********
2026-03-09 00:52:27.628192 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.628263 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.628275 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.628287 | orchestrator |
2026-03-09 00:52:27.628298 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-09 00:52:27.628309 | orchestrator | Monday 09 March 2026 00:49:52 +0000 (0:00:00.308) 0:02:29.798 **********
2026-03-09 00:52:27.628320 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:27.628332 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:27.628342 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:27.628351 | orchestrator |
2026-03-09 00:52:27.628360 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-09 00:52:27.628377 | orchestrator | Monday 09 March 2026 00:49:52 +0000 (0:00:00.312) 0:02:30.111 **********
2026-03-09 00:52:27.628388 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.628397 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.628407 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.628417 | orchestrator |
2026-03-09 00:52:27.628427 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-09 00:52:27.628439 | orchestrator | Monday 09 March 2026 00:49:53 +0000 (0:00:00.956) 0:02:31.068 **********
2026-03-09 00:52:27.628450 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:27.628460 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:27.628471 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:27.628481 | orchestrator |
2026-03-09 00:52:27.628492 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-09 00:52:27.628504 | orchestrator | Monday 09 March 2026 00:49:54 +0000 (0:00:00.892) 0:02:31.960 **********
2026-03-09 00:52:27.628516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:52:27.628538 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:52:27.628551 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:52:27.628559 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:52:27.628566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:52:27.628572 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:52:27.628579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:52:27.628586 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09
00:52:27.628593 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-09 00:52:27.628599 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-09 00:52:27.628606 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-09 00:52:27.628613 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-09 00:52:27.628619 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-09 00:52:27.628633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-09 00:52:27.628640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-09 00:52:27.628647 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-09 00:52:27.628653 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-09 00:52:27.628660 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-09 00:52:27.628667 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-09 00:52:27.628673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-09 00:52:27.628680 | orchestrator | 2026-03-09 00:52:27.628686 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-09 00:52:27.628693 | orchestrator | 2026-03-09 00:52:27.628700 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-09 00:52:27.628706 | orchestrator | Monday 09 March 2026 00:49:57 +0000 (0:00:03.313) 
0:02:35.274 ********** 2026-03-09 00:52:27.628713 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:52:27.628719 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:52:27.628726 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:52:27.628732 | orchestrator | 2026-03-09 00:52:27.628739 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-09 00:52:27.628746 | orchestrator | Monday 09 March 2026 00:49:58 +0000 (0:00:00.669) 0:02:35.943 ********** 2026-03-09 00:52:27.628752 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:52:27.628759 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:52:27.628766 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:52:27.628772 | orchestrator | 2026-03-09 00:52:27.628779 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-09 00:52:27.628786 | orchestrator | Monday 09 March 2026 00:49:59 +0000 (0:00:00.730) 0:02:36.673 ********** 2026-03-09 00:52:27.628792 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:52:27.628799 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:52:27.628805 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:52:27.628812 | orchestrator | 2026-03-09 00:52:27.628819 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-09 00:52:27.628825 | orchestrator | Monday 09 March 2026 00:49:59 +0000 (0:00:00.371) 0:02:37.045 ********** 2026-03-09 00:52:27.628832 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:52:27.628839 | orchestrator | 2026-03-09 00:52:27.628845 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-09 00:52:27.628852 | orchestrator | Monday 09 March 2026 00:50:00 +0000 (0:00:00.772) 0:02:37.817 ********** 2026-03-09 00:52:27.628859 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
00:52:27.628865 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:52:27.628872 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:52:27.628878 | orchestrator | 2026-03-09 00:52:27.628889 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-09 00:52:27.628900 | orchestrator | Monday 09 March 2026 00:50:00 +0000 (0:00:00.432) 0:02:38.250 ********** 2026-03-09 00:52:27.628911 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:52:27.628922 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:52:27.628932 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:52:27.628943 | orchestrator | 2026-03-09 00:52:27.628955 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-09 00:52:27.628966 | orchestrator | Monday 09 March 2026 00:50:01 +0000 (0:00:00.363) 0:02:38.614 ********** 2026-03-09 00:52:27.628978 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:52:27.628985 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:52:27.628997 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:52:27.629004 | orchestrator | 2026-03-09 00:52:27.629011 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-09 00:52:27.629017 | orchestrator | Monday 09 March 2026 00:50:01 +0000 (0:00:00.335) 0:02:38.949 ********** 2026-03-09 00:52:27.629024 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:52:27.629031 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:52:27.629037 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:52:27.629044 | orchestrator | 2026-03-09 00:52:27.629056 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-09 00:52:27.629063 | orchestrator | Monday 09 March 2026 00:50:02 +0000 (0:00:00.937) 0:02:39.887 ********** 2026-03-09 00:52:27.629070 | orchestrator | changed: [testbed-node-3] 2026-03-09 
00:52:27.629076 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:52:27.629083 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:52:27.629089 | orchestrator | 2026-03-09 00:52:27.629096 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-09 00:52:27.629103 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:01.331) 0:02:41.219 ********** 2026-03-09 00:52:27.629109 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:52:27.629116 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:52:27.629122 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:52:27.629129 | orchestrator | 2026-03-09 00:52:27.629135 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-09 00:52:27.629142 | orchestrator | Monday 09 March 2026 00:50:05 +0000 (0:00:01.472) 0:02:42.691 ********** 2026-03-09 00:52:27.629149 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:52:27.629155 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:52:27.629162 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:52:27.629168 | orchestrator | 2026-03-09 00:52:27.629175 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-09 00:52:27.629182 | orchestrator | 2026-03-09 00:52:27.629188 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-09 00:52:27.629217 | orchestrator | Monday 09 March 2026 00:50:16 +0000 (0:00:11.269) 0:02:53.960 ********** 2026-03-09 00:52:27.629225 | orchestrator | ok: [testbed-manager] 2026-03-09 00:52:27.629232 | orchestrator | 2026-03-09 00:52:27.629238 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-09 00:52:27.629245 | orchestrator | Monday 09 March 2026 00:50:17 +0000 (0:00:01.054) 0:02:55.015 ********** 2026-03-09 00:52:27.629252 | orchestrator | changed: [testbed-manager] 
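The kubeconfig handling this play performs (fetch the k3s admin kubeconfig from the first control-plane node, rewrite its server address to the cluster VIP, point KUBECONFIG at it) can be sketched as the shell steps below. This is a rough sketch, not the play's actual implementation: the VIP https://192.168.16.8:6443 comes from the log above, the /etc/rancher/k3s/k3s.yaml path and the 127.0.0.1 default server are k3s defaults, and the stub file stands in for the kubeconfig fetched from testbed-node-0.

```shell
set -eu
tmp=$(mktemp -d)
mkdir -p "$tmp/.kube"
# Stand-in for the admin kubeconfig fetched from /etc/rancher/k3s/k3s.yaml
# on the first control-plane node (testbed-node-0 in this run).
printf 'clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n' > "$tmp/.kube/config"
chmod 600 "$tmp/.kube/config"
# k3s writes https://127.0.0.1:6443 as the API server; rewrite it to the
# kube-vip address seen in the log so kubectl talks to the HA endpoint.
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' "$tmp/.kube/config"
grep 'server: https://192.168.16.8:6443' "$tmp/.kube/config"
```

In the real play the rewrite additionally happens for a second copy that is mounted into the manager service, which is why the "Change server address" task appears twice below.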

TASK [Get kubeconfig file] *****************************************************
Monday 09 March 2026 00:50:18 +0000 (0:00:00.671) 0:02:55.687 **********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 09 March 2026 00:50:18 +0000 (0:00:00.714) 0:02:56.402 **********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 09 March 2026 00:50:20 +0000 (0:00:01.218) 0:02:57.621 **********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 09 March 2026 00:50:21 +0000 (0:00:00.969) 0:02:58.591 **********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 09 March 2026 00:50:23 +0000 (0:00:02.319) 0:03:00.911 **********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 09 March 2026 00:50:24 +0000 (0:00:01.114) 0:03:02.025 **********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 09 March 2026 00:50:25 +0000 (0:00:00.853) 0:03:02.879 **********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 09 March 2026 00:50:25 +0000 (0:00:00.467) 0:03:03.346 **********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 09 March 2026 00:50:26 +0000 (0:00:00.128) 0:03:03.474 **********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 09 March 2026 00:50:26 +0000 (0:00:00.233) 0:03:03.708 **********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 09 March 2026 00:50:27 +0000 (0:00:00.949) 0:03:04.657 **********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 09 March 2026 00:50:29 +0000 (0:00:01.962) 0:03:06.620 **********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 09 March 2026 00:50:30 +0000 (0:00:01.084) 0:03:07.705 **********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 09 March 2026 00:50:30 +0000 (0:00:00.549) 0:03:08.254 **********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 09 March 2026 00:50:41 +0000 (0:00:11.085) 0:03:19.339 **********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 09 March 2026 00:50:59 +0000 (0:00:17.830) 0:03:37.169 **********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 09 March 2026 00:51:00 +0000 (0:00:00.620) 0:03:37.789 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 09 March 2026 00:51:00 +0000 (0:00:00.480) 0:03:38.270 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 09 March 2026 00:51:01 +0000 (0:00:00.414) 0:03:38.684 **********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 09 March 2026 00:51:02 +0000 (0:00:01.092) 0:03:39.776 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 09 March 2026 00:51:03 +0000 (0:00:01.160) 0:03:40.937 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 09 March 2026 00:51:04 +0000 (0:00:01.429) 0:03:42.366 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 09 March 2026 00:51:05 +0000 (0:00:00.105) 0:03:42.472 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 09 March 2026 00:51:06 +0000 (0:00:00.980) 0:03:43.452 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 09 March 2026 00:51:06 +0000 (0:00:00.125) 0:03:43.578 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 09 March 2026 00:51:06 +0000 (0:00:00.105) 0:03:43.683 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 09 March 2026 00:51:06 +0000 (0:00:00.117) 0:03:43.801 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 09 March 2026 00:51:06 +0000 (0:00:00.123) 0:03:43.924 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 09 March 2026 00:51:12 +0000 (0:00:05.912) 0:03:49.837 **********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
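The "FAILED - RETRYING ... (30 retries left)" line above is Ansible's `until`/`retries` loop re-running a readiness check until the Cilium workloads roll out. A minimal shell sketch of that retry pattern follows; the kubectl command in the comment is a plausible form of what the task wraps (the kube-system namespace is an assumption), and the sketch itself is demonstrated with `true` so it runs anywhere.

```shell
# Retry a readiness check up to $retries times with $delay seconds between
# attempts, mirroring Ansible's until/retries/delay behaviour.
retry_check() {
  cmd=$1; retries=${2:-30}; delay=${3:-2}
  i=0
  until $cmd; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1   # out of retries -> fail
    sleep "$delay"
  done
}
# Real-world usage would look something like (namespace assumed):
#   retry_check "kubectl -n kube-system rollout status daemonset/cilium --timeout=10s"
retry_check true 30 0 && echo "ready"
```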
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 09 March 2026 00:51:55 +0000 (0:00:43.342) 0:04:33.179 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 09 March 2026 00:51:57 +0000 (0:00:01.276) 0:04:34.455 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 09 March 2026 00:51:59 +0000 (0:00:02.010) 0:04:36.466 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 09 March 2026 00:52:00 +0000 (0:00:01.307) 0:04:37.774 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 09 March 2026 00:52:00 +0000 (0:00:00.145) 0:04:37.920 **********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 09 March 2026 00:52:02 +0000 (0:00:02.339) 0:04:40.259 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 09 March 2026 00:52:03 +0000 (0:00:00.407) 0:04:40.667 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 09 March 2026 00:52:04 +0000 (0:00:01.306) 0:04:41.973 **********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 09 March 2026 00:52:04 +0000 (0:00:00.186) 0:04:42.159 **********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 09 March 2026 00:52:05 +0000 (0:00:00.329) 0:04:42.489 **********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 09 March 2026 00:52:10 +0000 (0:00:05.219) 0:04:47.709 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Monday 09 March 2026 00:52:11 +0000 (0:00:00.888) 0:04:48.598 **********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Monday 09 March 2026 00:52:25 +0000 (0:00:13.851) 0:05:02.449 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Monday 09 March 2026 00:52:25 +0000 (0:00:00.779) 0:05:03.229 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=21   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=50   changed=23   unreachable=0    failed=0    skipped=28   rescued=0    ignored=0
testbed-node-1             : ok=38   changed=16   unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
testbed-node-2             : ok=38   changed=16   unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
testbed-node-3             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
testbed-node-4             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
testbed-node-5             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Monday 09 March 2026 00:52:26 +0000 (0:00:00.463) 0:05:03.693 **********
===============================================================================
k3s_server_post : Wait for Cilium resources ---------------------------- 43.34s
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.33s
k3s_server : Enable and check K3s service ------------------------------ 26.23s
kubectl : Install required packages ------------------------------------ 17.83s
Manage labels ---------------------------------------------------------- 13.85s
k3s_agent : Manage k3s service ----------------------------------------- 11.27s
kubectl : Add repository Debian ---------------------------------------- 11.09s
k3s_download : Download k3s binary x64 ---------------------------------- 7.25s
k3s_server_post : Install Cilium ---------------------------------------- 5.91s
k9s : Install k9s packages ---------------------------------------------- 5.22s
k3s_download : Download k3s binary armhf -------------------------------- 4.30s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.31s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.25s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.99s
k3s_server : Stop k3s-init ---------------------------------------------- 2.89s
k3s_server : Detect Kubernetes version for label compatibility ---------- 2.79s
k3s_download : Download k3s binary arm64 -------------------------------- 2.72s
k3s_server : Create custom resolv.conf for k3s -------------------------- 2.70s
k3s_server : Create /etc/rancher/k3s directory -------------------------- 2.44s
k3s_server : Copy vip manifest to first master -------------------------- 2.44s

2026-03-09 00:52:27 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED
2026-03-09 00:52:27 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED
2026-03-09 00:52:27 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:30 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:52:30 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED
2026-03-09 00:52:30 | INFO  | Task 6a9d89e5-a8e8-4835-a10d-876570d9470c is in state STARTED
2026-03-09 00:52:30 | INFO  | Task
661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:30.667376 | orchestrator | 2026-03-09 00:52:30 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:30.672455 | orchestrator | 2026-03-09 00:52:30 | INFO  | Task 3a2889dc-97d8-4fce-8924-a77a8a8d4731 is in state STARTED 2026-03-09 00:52:30.672551 | orchestrator | 2026-03-09 00:52:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:33.726735 | orchestrator | 2026-03-09 00:52:33 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:33.730585 | orchestrator | 2026-03-09 00:52:33 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:33.731842 | orchestrator | 2026-03-09 00:52:33 | INFO  | Task 6a9d89e5-a8e8-4835-a10d-876570d9470c is in state STARTED 2026-03-09 00:52:33.734336 | orchestrator | 2026-03-09 00:52:33 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:33.736520 | orchestrator | 2026-03-09 00:52:33 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:33.738389 | orchestrator | 2026-03-09 00:52:33 | INFO  | Task 3a2889dc-97d8-4fce-8924-a77a8a8d4731 is in state STARTED 2026-03-09 00:52:33.738458 | orchestrator | 2026-03-09 00:52:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:36.797917 | orchestrator | 2026-03-09 00:52:36 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:36.799423 | orchestrator | 2026-03-09 00:52:36 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:36.800075 | orchestrator | 2026-03-09 00:52:36 | INFO  | Task 6a9d89e5-a8e8-4835-a10d-876570d9470c is in state SUCCESS 2026-03-09 00:52:36.800735 | orchestrator | 2026-03-09 00:52:36 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:36.801751 | orchestrator | 2026-03-09 00:52:36 | INFO  | Task 
6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:36.803939 | orchestrator | 2026-03-09 00:52:36 | INFO  | Task 3a2889dc-97d8-4fce-8924-a77a8a8d4731 is in state STARTED 2026-03-09 00:52:36.803976 | orchestrator | 2026-03-09 00:52:36 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:39.845003 | orchestrator | 2026-03-09 00:52:39 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:39.848994 | orchestrator | 2026-03-09 00:52:39 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:39.851766 | orchestrator | 2026-03-09 00:52:39 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:39.854676 | orchestrator | 2026-03-09 00:52:39 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:39.859081 | orchestrator | 2026-03-09 00:52:39 | INFO  | Task 3a2889dc-97d8-4fce-8924-a77a8a8d4731 is in state STARTED 2026-03-09 00:52:39.859161 | orchestrator | 2026-03-09 00:52:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:42.905641 | orchestrator | 2026-03-09 00:52:42 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:52:42.906436 | orchestrator | 2026-03-09 00:52:42 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state STARTED 2026-03-09 00:52:42.908471 | orchestrator | 2026-03-09 00:52:42 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:52:42.909503 | orchestrator | 2026-03-09 00:52:42 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:52:42.911265 | orchestrator | 2026-03-09 00:52:42 | INFO  | Task 3a2889dc-97d8-4fce-8924-a77a8a8d4731 is in state SUCCESS 2026-03-09 00:52:42.911311 | orchestrator | 2026-03-09 00:52:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:45.966360 | orchestrator | 2026-03-09 00:52:45 | INFO  | Task 
88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
[... identical polling output repeated every ~3 seconds: tasks 88f4461b, 6f359196, 661decfa, and 6269c415 remained in state STARTED from 00:52:45 until 00:54:23; repeated lines omitted ...]
2026-03-09 00:54:23.371787 | orchestrator | 2026-03-09 00:54:23 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:54:23.372868 | orchestrator | 2026-03-09 00:54:23.372904 | orchestrator | 2026-03-09 00:54:23.372914 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-09 00:54:23.372923 | orchestrator | 2026-03-09 00:54:23.372933 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-09 00:54:23.372942 | orchestrator | Monday 09 March 2026 00:52:31 +0000 (0:00:00.166) 0:00:00.166 ********** 2026-03-09 00:54:23.372950 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-09 00:54:23.372959 | 
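The task tracker output above follows a simple poll loop: query each task's state, report it, wait one second, and repeat until every task reaches SUCCESS. A minimal sketch of that pattern follows; the `get_state` callable, the task IDs, and the timeout handling are illustrative assumptions, not the actual osism client API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until every task reports SUCCESS, mirroring the
    'Task ... is in state STARTED / Wait 1 second(s)' lines in the log.
    get_state is a hypothetical callable returning a state string."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

In the log above this loop runs for roughly two minutes before the first tasks flip to SUCCESS, which is expected for long-running deployment plays.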
orchestrator | 2026-03-09 00:54:23.372967 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-09 00:54:23.372975 | orchestrator | Monday 09 March 2026 00:52:32 +0000 (0:00:00.797) 0:00:00.964 ********** 2026-03-09 00:54:23.372983 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:23.372991 | orchestrator | 2026-03-09 00:54:23.372999 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-09 00:54:23.373007 | orchestrator | Monday 09 March 2026 00:52:33 +0000 (0:00:01.468) 0:00:02.432 ********** 2026-03-09 00:54:23.373015 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:23.373023 | orchestrator | 2026-03-09 00:54:23.373031 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:54:23.373039 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:54:23.373049 | orchestrator | 2026-03-09 00:54:23.373057 | orchestrator | 2026-03-09 00:54:23.373065 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:54:23.373114 | orchestrator | Monday 09 March 2026 00:52:34 +0000 (0:00:00.574) 0:00:03.007 ********** 2026-03-09 00:54:23.373122 | orchestrator | =============================================================================== 2026-03-09 00:54:23.373130 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s 2026-03-09 00:54:23.373162 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2026-03-09 00:54:23.373170 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.57s 2026-03-09 00:54:23.373178 | orchestrator | 2026-03-09 00:54:23.373185 | orchestrator | 2026-03-09 00:54:23.373193 | orchestrator | PLAY [Prepare kubeconfig file] 
*************************************************
2026-03-09 00:54:23.373201 | orchestrator |
2026-03-09 00:54:23.373209 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-09 00:54:23.373216 | orchestrator | Monday 09 March 2026 00:52:31 +0000 (0:00:00.194) 0:00:00.194 **********
2026-03-09 00:54:23.373224 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:23.373233 | orchestrator |
2026-03-09 00:54:23.373244 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-09 00:54:23.373257 | orchestrator | Monday 09 March 2026 00:52:32 +0000 (0:00:00.849) 0:00:01.044 **********
2026-03-09 00:54:23.373472 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:23.373495 | orchestrator |
2026-03-09 00:54:23.373505 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-09 00:54:23.373514 | orchestrator | Monday 09 March 2026 00:52:33 +0000 (0:00:00.774) 0:00:01.819 **********
2026-03-09 00:54:23.373523 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-09 00:54:23.373532 | orchestrator |
2026-03-09 00:54:23.373541 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-09 00:54:23.373550 | orchestrator | Monday 09 March 2026 00:52:34 +0000 (0:00:00.776) 0:00:02.595 **********
2026-03-09 00:54:23.373560 | orchestrator | changed: [testbed-manager]
2026-03-09 00:54:23.373569 | orchestrator |
2026-03-09 00:54:23.373577 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-09 00:54:23.373586 | orchestrator | Monday 09 March 2026 00:52:35 +0000 (0:00:01.766) 0:00:04.362 **********
2026-03-09 00:54:23.373596 | orchestrator | changed: [testbed-manager]
2026-03-09 00:54:23.373605 | orchestrator |
2026-03-09 00:54:23.373614 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-09 00:54:23.373624 | orchestrator | Monday 09 March 2026 00:52:36 +0000 (0:00:00.680) 0:00:05.043 **********
2026-03-09 00:54:23.373633 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-09 00:54:23.373642 | orchestrator |
2026-03-09 00:54:23.373651 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-09 00:54:23.373660 | orchestrator | Monday 09 March 2026 00:52:38 +0000 (0:00:01.642) 0:00:06.685 **********
2026-03-09 00:54:23.373669 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-09 00:54:23.373678 | orchestrator |
2026-03-09 00:54:23.373688 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-09 00:54:23.373697 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.869) 0:00:07.554 **********
2026-03-09 00:54:23.373707 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:23.373716 | orchestrator |
2026-03-09 00:54:23.373724 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-09 00:54:23.373732 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.407) 0:00:07.962 **********
2026-03-09 00:54:23.373740 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:23.373748 | orchestrator |
2026-03-09 00:54:23.373755 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:54:23.373764 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:54:23.373772 | orchestrator |
2026-03-09 00:54:23.373779 | orchestrator |
2026-03-09 00:54:23.373787 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:54:23.373795 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.346) 0:00:08.308 **********
2026-03-09 00:54:23.373815 | orchestrator | ===============================================================================
2026-03-09 00:54:23.373823 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.77s
2026-03-09 00:54:23.373841 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.64s
2026-03-09 00:54:23.373849 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.87s
2026-03-09 00:54:23.373870 | orchestrator | Get home directory of operator user ------------------------------------- 0.85s
2026-03-09 00:54:23.373878 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s
2026-03-09 00:54:23.373886 | orchestrator | Create .kube directory -------------------------------------------------- 0.77s
2026-03-09 00:54:23.373894 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.68s
2026-03-09 00:54:23.373901 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s
2026-03-09 00:54:23.373909 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s
2026-03-09 00:54:23.373940 | orchestrator |
2026-03-09 00:54:23.373949 | orchestrator |
2026-03-09 00:54:23 | INFO  | Task 6f359196-55c7-4b0f-a370-cf0da13b6880 is in state SUCCESS
2026-03-09 00:54:23.374990 | orchestrator |
2026-03-09 00:54:23.375025 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-09 00:54:23.375034 | orchestrator |
2026-03-09 00:54:23.375043 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-09 00:54:23.375052 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.149) 0:00:00.149 **********
2026-03-09 00:54:23.375060 | orchestrator | ok: [localhost] => {
2026-03-09 00:54:23.375115 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-09 00:54:23.375125 | orchestrator | }
2026-03-09 00:54:23.375134 | orchestrator |
2026-03-09 00:54:23.375142 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-09 00:54:23.375150 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.060) 0:00:00.210 **********
2026-03-09 00:54:23.375159 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-09 00:54:23.375170 | orchestrator | ...ignoring
2026-03-09 00:54:23.375178 | orchestrator |
2026-03-09 00:54:23.375187 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-09 00:54:23.375195 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:03.242) 0:00:03.452 **********
2026-03-09 00:54:23.375203 | orchestrator | skipping: [localhost]
2026-03-09 00:54:23.375210 | orchestrator |
2026-03-09 00:54:23.375218 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-09 00:54:23.375226 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:00.070) 0:00:03.522 **********
2026-03-09 00:54:23.375234 | orchestrator | ok: [localhost]
2026-03-09 00:54:23.375242 | orchestrator |
2026-03-09 00:54:23.375250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:54:23.375258 | orchestrator |
2026-03-09 00:54:23.375265 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 00:54:23.375273 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:00.267) 0:00:03.790 **********
2026-03-09 00:54:23.375281 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:23.375289 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:23.375297 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:23.375304 | orchestrator |
2026-03-09 00:54:23.375312 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:54:23.375320 | orchestrator | Monday 09 March 2026 00:51:07 +0000 (0:00:00.624) 0:00:04.414 **********
2026-03-09 00:54:23.375328 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-09 00:54:23.375336 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-09 00:54:23.375344 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-09 00:54:23.375351 | orchestrator |
2026-03-09 00:54:23.375359 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-09 00:54:23.375367 | orchestrator |
2026-03-09 00:54:23.375409 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-09 00:54:23.375418 | orchestrator | Monday 09 March 2026 00:51:08 +0000 (0:00:01.149) 0:00:05.564 **********
2026-03-09 00:54:23.375426 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:54:23.375434 | orchestrator |
2026-03-09 00:54:23.375441 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-09 00:54:23.375449 | orchestrator | Monday 09 March 2026 00:51:09 +0000 (0:00:01.154) 0:00:06.231 **********
2026-03-09 00:54:23.375457 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:23.375465 | orchestrator |
2026-03-09 00:54:23.375472 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-09 00:54:23.375480 | orchestrator | Monday 09 March 2026 00:51:10 +0000 (0:00:01.154) 0:00:07.385 **********
2026-03-09 00:54:23.375488 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.375501 | orchestrator |
2026-03-09 00:54:23.375513 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-09 00:54:23.375525 | orchestrator | Monday 09 March 2026 00:51:10 +0000 (0:00:00.363) 0:00:07.749 **********
2026-03-09 00:54:23.375536 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.375547 | orchestrator |
2026-03-09 00:54:23.375559 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-09 00:54:23.375571 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:00.372) 0:00:08.122 **********
2026-03-09 00:54:23.375583 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.375595 | orchestrator |
2026-03-09 00:54:23.375609 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-09 00:54:23.375621 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:00.590) 0:00:08.712 **********
2026-03-09 00:54:23.375653 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.375667 | orchestrator |
2026-03-09 00:54:23.375690 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-09 00:54:23.375704 | orchestrator | Monday 09 March 2026 00:51:13 +0000 (0:00:01.479) 0:00:10.192 **********
2026-03-09 00:54:23.375718 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:54:23.375726 | orchestrator |
2026-03-09 00:54:23.375734 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-09 00:54:23.375742 | orchestrator | Monday 09 March 2026 00:51:13 +0000 (0:00:00.810) 0:00:11.003 **********
2026-03-09 00:54:23.375750 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:23.375758 | orchestrator |
2026-03-09 00:54:23.375765 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-09 00:54:23.375773 | orchestrator | Monday 09 March 2026 00:51:14 +0000 (0:00:00.807) 0:00:11.810 **********
2026-03-09 00:54:23.375781 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.375789 | orchestrator |
2026-03-09 00:54:23.375796 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-09 00:54:23.375804 | orchestrator | Monday 09 March 2026 00:51:15 +0000 (0:00:00.394) 0:00:12.205 **********
2026-03-09 00:54:23.375812 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.375820 | orchestrator |
2026-03-09 00:54:23.375844 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-09 00:54:23.375853 | orchestrator | Monday 09 March 2026 00:51:16 +0000 (0:00:00.925) 0:00:13.131 **********
2026-03-09 00:54:23.375865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.375886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.375900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.375909 | orchestrator |
2026-03-09 00:54:23.375917 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-09 00:54:23.375925 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:01.670) 0:00:14.802 **********
2026-03-09 00:54:23.375941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.375951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.375964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.375973 | orchestrator |
2026-03-09 00:54:23.375981 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-09 00:54:23.375989 | orchestrator | Monday 09 March 2026 00:51:20 +0000 (0:00:02.964) 0:00:17.767 **********
2026-03-09 00:54:23.375997 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-09 00:54:23.376005 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-09 00:54:23.376013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-09 00:54:23.376021 | orchestrator |
2026-03-09 00:54:23.376029 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-09 00:54:23.376037 | orchestrator | Monday 09 March 2026 00:51:22 +0000 (0:00:01.841) 0:00:19.609 **********
2026-03-09 00:54:23.376048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-09 00:54:23.376056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-09 00:54:23.376064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-09 00:54:23.376134 | orchestrator |
2026-03-09 00:54:23.376146 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-09 00:54:23.376160 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:01.760) 0:00:21.370 **********
2026-03-09 00:54:23.376171 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-09 00:54:23.376183 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-09 00:54:23.376197 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-09 00:54:23.376211 | orchestrator |
2026-03-09 00:54:23.376223 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-09 00:54:23.376244 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:01.532) 0:00:22.902 **********
2026-03-09 00:54:23.376258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-09 00:54:23.376267 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-09 00:54:23.376275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-09 00:54:23.376283 | orchestrator |
2026-03-09 00:54:23.376291 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-09 00:54:23.376298 | orchestrator | Monday 09 March 2026 00:51:27 +0000 (0:00:01.840) 0:00:24.743 **********
2026-03-09 00:54:23.376306 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-09 00:54:23.376314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-09 00:54:23.376322 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-09 00:54:23.376329 | orchestrator |
2026-03-09 00:54:23.376337 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-09 00:54:23.376345 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:01.604) 0:00:26.349 **********
2026-03-09 00:54:23.376352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-09 00:54:23.376360 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-09 00:54:23.376368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-09 00:54:23.376376 | orchestrator |
2026-03-09 00:54:23.376383 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-09 00:54:23.376391 | orchestrator | Monday 09 March 2026 00:51:31 +0000 (0:00:01.974) 0:00:28.323 **********
2026-03-09 00:54:23.376399 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:54:23.376407 | orchestrator |
2026-03-09 00:54:23.376414 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-03-09 00:54:23.376422 | orchestrator | Monday 09 March 2026 00:51:32 +0000 (0:00:01.112) 0:00:29.435 **********
2026-03-09 00:54:23.376431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376476 | orchestrator |
2026-03-09 00:54:23.376484 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-03-09 00:54:23.376492 | orchestrator | Monday 09 March 2026 00:51:33 +0000 (0:00:01.534) 0:00:30.970 **********
2026-03-09 00:54:23.376500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376509 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.376518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376531 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:23.376548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376560 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:23.376573 | orchestrator |
2026-03-09 00:54:23.376592 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-03-09 00:54:23.376606 | orchestrator | Monday 09 March 2026 00:51:34 +0000 (0:00:00.522) 0:00:31.492 **********
2026-03-09 00:54:23.376619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376648 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:23.376661 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:23.376678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376701 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:23.376714 | orchestrator |
2026-03-09 00:54:23.376727 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-03-09 00:54:23.376739 | orchestrator | Monday 09 March 2026 00:51:35 +0000 (0:00:00.972) 0:00:32.465 **********
2026-03-09 00:54:23.376761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:23.376813 | orchestrator |
2026-03-09 00:54:23.376826 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-03-09 00:54:23.376838 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:01.083) 0:00:33.548 **********
2026-03-09 00:54:23.376846 | orchestrator | changed: [testbed-node-0] => {
2026-03-09 00:54:23.376854 | orchestrator |     "msg": "Notifying handlers"
2026-03-09 00:54:23.376862 | orchestrator | }
2026-03-09 00:54:23.376870 | orchestrator | changed: [testbed-node-1] => {
2026-03-09 00:54:23.376878 | orchestrator |     "msg": "Notifying handlers"
2026-03-09 00:54:23.376886 | orchestrator | }
2026-03-09 00:54:23.376894 | orchestrator | changed: [testbed-node-2] => {
2026-03-09 00:54:23.376901 | orchestrator |     "msg": "Notifying handlers"
2026-03-09 00:54:23.376918 | orchestrator | }
2026-03-09 00:54:23.376926 | orchestrator |
2026-03-09 00:54:23.376934 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-09 00:54:23.376942 | 
orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.435) 0:00:33.984 ********** 2026-03-09 00:54:23.376972 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:23.376989 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:23.377014 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:23.377022 | orchestrator | 2026-03-09 00:54:23.377030 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-09 00:54:23.377038 | orchestrator | Monday 09 March 2026 00:51:38 +0000 (0:00:01.354) 0:00:35.339 ********** 2026-03-09 00:54:23.377046 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:23.377054 | orchestrator | 
changed: [testbed-node-1] 2026-03-09 00:54:23.377061 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:23.377094 | orchestrator | 2026-03-09 00:54:23.377108 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-09 00:54:23.377122 | orchestrator | Monday 09 March 2026 00:51:39 +0000 (0:00:01.134) 0:00:36.474 ********** 2026-03-09 00:54:23.377134 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:23.377145 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:23.377153 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:23.377161 | orchestrator | 2026-03-09 00:54:23.377169 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-09 00:54:23.377182 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:09.503) 0:00:45.978 ********** 2026-03-09 00:54:23.377190 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:23.377197 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:23.377205 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:23.377213 | orchestrator | 2026-03-09 00:54:23.377221 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:54:23.377229 | orchestrator | 2026-03-09 00:54:23.377236 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:54:23.377244 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:00.613) 0:00:46.591 ********** 2026-03-09 00:54:23.377252 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:23.377260 | orchestrator | 2026-03-09 00:54:23.377268 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:54:23.377276 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:00.895) 0:00:47.486 ********** 2026-03-09 00:54:23.377283 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:23.377291 | 
orchestrator | 2026-03-09 00:54:23.377299 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:54:23.377307 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:00.361) 0:00:47.848 ********** 2026-03-09 00:54:23.377315 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:23.377323 | orchestrator | 2026-03-09 00:54:23.377336 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:54:23.377344 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:02.396) 0:00:50.246 ********** 2026-03-09 00:54:23.377352 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:23.377360 | orchestrator | 2026-03-09 00:54:23.377367 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:54:23.377375 | orchestrator | 2026-03-09 00:54:23.377383 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:54:23.377391 | orchestrator | Monday 09 March 2026 00:53:47 +0000 (0:01:53.948) 0:02:44.195 ********** 2026-03-09 00:54:23.377399 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:23.377406 | orchestrator | 2026-03-09 00:54:23.377414 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:54:23.377422 | orchestrator | Monday 09 March 2026 00:53:47 +0000 (0:00:00.729) 0:02:44.925 ********** 2026-03-09 00:54:23.377436 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:23.377444 | orchestrator | 2026-03-09 00:54:23.377452 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:54:23.377460 | orchestrator | Monday 09 March 2026 00:53:48 +0000 (0:00:00.132) 0:02:45.058 ********** 2026-03-09 00:54:23.377467 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:23.377475 | orchestrator | 2026-03-09 00:54:23.377483 | 
orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:54:23.377491 | orchestrator | Monday 09 March 2026 00:53:49 +0000 (0:00:01.735) 0:02:46.794 ********** 2026-03-09 00:54:23.377498 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:23.377506 | orchestrator | 2026-03-09 00:54:23.377514 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:54:23.377522 | orchestrator | 2026-03-09 00:54:23.377529 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:54:23.377537 | orchestrator | Monday 09 March 2026 00:54:01 +0000 (0:00:12.154) 0:02:58.948 ********** 2026-03-09 00:54:23.377545 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:23.377552 | orchestrator | 2026-03-09 00:54:23.377560 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:54:23.377568 | orchestrator | Monday 09 March 2026 00:54:02 +0000 (0:00:00.855) 0:02:59.804 ********** 2026-03-09 00:54:23.377576 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:23.377584 | orchestrator | 2026-03-09 00:54:23.377591 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:54:23.377599 | orchestrator | Monday 09 March 2026 00:54:02 +0000 (0:00:00.163) 0:02:59.968 ********** 2026-03-09 00:54:23.377607 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:23.377614 | orchestrator | 2026-03-09 00:54:23.377622 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:54:23.377630 | orchestrator | Monday 09 March 2026 00:54:05 +0000 (0:00:02.424) 0:03:02.392 ********** 2026-03-09 00:54:23.377638 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:23.377645 | orchestrator | 2026-03-09 00:54:23.377653 | orchestrator | PLAY [Apply rabbitmq 
post-configuration] *************************************** 2026-03-09 00:54:23.377661 | orchestrator | 2026-03-09 00:54:23.377669 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-09 00:54:23.377677 | orchestrator | Monday 09 March 2026 00:54:18 +0000 (0:00:13.099) 0:03:15.491 ********** 2026-03-09 00:54:23.377685 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:54:23.377692 | orchestrator | 2026-03-09 00:54:23.377700 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-09 00:54:23.377708 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:00.763) 0:03:16.255 ********** 2026-03-09 00:54:23.377716 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:23.377724 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:23.377732 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:23.377742 | orchestrator | 2026-03-09 00:54:23.377755 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:54:23.377768 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-09 00:54:23.377783 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-03-09 00:54:23.377796 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:54:23.377814 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:54:23.377826 | orchestrator | 2026-03-09 00:54:23.377838 | orchestrator | 2026-03-09 00:54:23.377850 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:54:23.377872 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:02.929) 0:03:19.184 ********** 2026-03-09 00:54:23.377885 | 
orchestrator | =============================================================================== 2026-03-09 00:54:23.377899 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 139.20s 2026-03-09 00:54:23.377912 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.50s 2026-03-09 00:54:23.377925 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.56s 2026-03-09 00:54:23.377937 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.24s 2026-03-09 00:54:23.377946 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.96s 2026-03-09 00:54:23.377953 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.93s 2026-03-09 00:54:23.377961 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.48s 2026-03-09 00:54:23.377975 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.97s 2026-03-09 00:54:23.377983 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.84s 2026-03-09 00:54:23.377991 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.84s 2026-03-09 00:54:23.377998 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.76s 2026-03-09 00:54:23.378006 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.66s 2026-03-09 00:54:23.378139 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.61s 2026-03-09 00:54:23.378152 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.53s 2026-03-09 00:54:23.378161 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.53s 2026-03-09 00:54:23.378169 | orchestrator | 
rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.48s 2026-03-09 00:54:23.378176 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.35s 2026-03-09 00:54:23.378184 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2026-03-09 00:54:23.378192 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s 2026-03-09 00:54:23.378200 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.13s 2026-03-09 00:54:23.378208 | orchestrator | 2026-03-09 00:54:23 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:54:23.378447 | orchestrator | 2026-03-09 00:54:23 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:54:23.378547 | orchestrator | 2026-03-09 00:54:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:26.411363 | orchestrator | 2026-03-09 00:54:26 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:54:26.411974 | orchestrator | 2026-03-09 00:54:26 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:54:26.413642 | orchestrator | 2026-03-09 00:54:26 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:54:26.413712 | orchestrator | 2026-03-09 00:54:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:29.452366 | orchestrator | 2026-03-09 00:54:29 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:54:29.452440 | orchestrator | 2026-03-09 00:54:29 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:54:29.453506 | orchestrator | 2026-03-09 00:54:29 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:54:29.453595 | orchestrator | 2026-03-09 00:54:29 | INFO  | Wait 1 second(s) until the next check 
6269c415-0332-4443-bf36-4d009a76b71c is in state STARTED 2026-03-09 00:54:53.828293 | orchestrator | 2026-03-09 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:39.554484 | orchestrator | 2026-03-09 00:55:39 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:55:39.556759 | orchestrator | 2026-03-09 00:55:39 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:55:39.564200 | orchestrator | 2026-03-09 00:55:39 | INFO  | Task 6269c415-0332-4443-bf36-4d009a76b71c is in state SUCCESS 2026-03-09 00:55:39.565102 | orchestrator | 2026-03-09 00:55:39.565146 | orchestrator | 2026-03-09 
00:55:39.565152 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:55:39.565158 | orchestrator | 2026-03-09 00:55:39.565163 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:55:39.565168 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:00.231) 0:00:00.231 ********** 2026-03-09 00:55:39.565173 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:55:39.565179 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:55:39.565184 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:55:39.565189 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.565193 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.565198 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.565202 | orchestrator | 2026-03-09 00:55:39.565207 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:55:39.565212 | orchestrator | Monday 09 March 2026 00:51:58 +0000 (0:00:01.397) 0:00:01.628 ********** 2026-03-09 00:55:39.565217 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-09 00:55:39.565222 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-09 00:55:39.565226 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-09 00:55:39.565231 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-09 00:55:39.565235 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-09 00:55:39.565240 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-09 00:55:39.565244 | orchestrator | 2026-03-09 00:55:39.565249 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-09 00:55:39.565253 | orchestrator | 2026-03-09 00:55:39.565258 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-09 00:55:39.565263 | 
orchestrator | Monday 09 March 2026 00:51:59 +0000 (0:00:01.313) 0:00:02.942 ********** 2026-03-09 00:55:39.565268 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:39.565275 | orchestrator | 2026-03-09 00:55:39.565280 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-09 00:55:39.565285 | orchestrator | Monday 09 March 2026 00:52:01 +0000 (0:00:01.658) 0:00:04.600 ********** 2026-03-09 00:55:39.565291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565346 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565418 | orchestrator | 2026-03-09 00:55:39.565433 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-09 00:55:39.565439 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:01.571) 0:00:06.172 ********** 2026-03-09 00:55:39.565444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565493 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565503 | orchestrator | 2026-03-09 00:55:39.565511 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-09 00:55:39.565523 | orchestrator | Monday 09 March 2026 00:52:05 +0000 (0:00:02.140) 0:00:08.313 ********** 2026-03-09 00:55:39.565607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565647 | orchestrator | 2026-03-09 00:55:39.565654 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-09 00:55:39.565662 | orchestrator | Monday 09 March 2026 00:52:07 +0000 (0:00:02.112) 0:00:10.425 ********** 2026-03-09 00:55:39.565669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565723 | orchestrator | 2026-03-09 00:55:39.565735 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-09 00:55:39.565743 | orchestrator | Monday 09 March 2026 00:52:08 +0000 (0:00:01.670) 0:00:12.096 ********** 2026-03-09 00:55:39.565750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.565816 | orchestrator | 2026-03-09 00:55:39.565832 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-09 00:55:39.565845 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:01.972) 0:00:14.068 ********** 2026-03-09 00:55:39.565852 | orchestrator | changed: 
[testbed-node-3] => { 2026-03-09 00:55:39.565861 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.565868 | orchestrator | } 2026-03-09 00:55:39.565876 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 00:55:39.565883 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.565891 | orchestrator | } 2026-03-09 00:55:39.565898 | orchestrator | changed: [testbed-node-5] => { 2026-03-09 00:55:39.565906 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.565913 | orchestrator | } 2026-03-09 00:55:39.565921 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:55:39.565928 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.565936 | orchestrator | } 2026-03-09 00:55:39.565945 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:55:39.565951 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.565958 | orchestrator | } 2026-03-09 00:55:39.565965 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:55:39.565972 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.565979 | orchestrator | } 2026-03-09 00:55:39.565987 | orchestrator | 2026-03-09 00:55:39.566134 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:55:39.566142 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:00.800) 0:00:14.868 ********** 2026-03-09 00:55:39.566147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.566160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.566165 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:55:39.566171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.566182 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:55:39.566188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.566192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.566213 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:55:39.566219 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 00:55:39.566223 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.566228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.566233 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.566238 | orchestrator | 2026-03-09 00:55:39.566242 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-09 00:55:39.566247 | orchestrator | Monday 09 March 2026 00:52:12 +0000 (0:00:01.271) 0:00:16.140 ********** 2026-03-09 00:55:39.566252 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:55:39.566257 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:55:39.566261 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:55:39.566266 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:39.566270 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.566275 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.566279 | orchestrator | 2026-03-09 00:55:39.566284 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-09 00:55:39.566292 | orchestrator | Monday 09 March 2026 00:52:17 +0000 (0:00:04.086) 0:00:20.226 ********** 2026-03-09 00:55:39.566297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-09 00:55:39.566302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-09 00:55:39.566307 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-09 
00:55:39.566311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-09 00:55:39.566316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-09 00:55:39.566320 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-09 00:55:39.566325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:39.566330 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:39.566335 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:39.566339 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:39.566348 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:39.566353 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:39.566369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-09 00:55:39.566376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-09 00:55:39.566380 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-09 00:55:39.566385 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-09 00:55:39.566390 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-09 00:55:39.566394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-09 00:55:39.566399 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:39.566405 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:39.566410 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:39.566414 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:39.566419 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:39.566423 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:39.566428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:39.566433 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:39.566437 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:39.566442 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:39.566446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:39.566451 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:39.566456 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 
'value': False}) 2026-03-09 00:55:39.566460 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:39.566465 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:39.566469 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:39.566474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:39.566478 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:39.566483 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:55:39.566491 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:55:39.566496 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:55:39.566504 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:55:39.566508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:55:39.566513 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:55:39.566517 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-09 00:55:39.566524 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-09 00:55:39.566529 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-09 00:55:39.566534 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-09 00:55:39.566538 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-09 00:55:39.566547 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-09 00:55:39.566552 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:55:39.566556 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:55:39.566561 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:55:39.566566 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:55:39.566571 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:55:39.566575 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:55:39.566580 | orchestrator | 2026-03-09 00:55:39.566585 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:39.566589 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:22.315) 0:00:42.541 ********** 2026-03-09 00:55:39.566594 | orchestrator | 2026-03-09 00:55:39.566598 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-09 00:55:39.566603 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.117) 0:00:42.659 ********** 2026-03-09 00:55:39.566607 | orchestrator | 2026-03-09 00:55:39.566612 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:39.566617 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.115) 0:00:42.775 ********** 2026-03-09 00:55:39.566621 | orchestrator | 2026-03-09 00:55:39.566626 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:39.566631 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.122) 0:00:42.897 ********** 2026-03-09 00:55:39.566635 | orchestrator | 2026-03-09 00:55:39.566639 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:39.566644 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.116) 0:00:43.014 ********** 2026-03-09 00:55:39.566648 | orchestrator | 2026-03-09 00:55:39.566653 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:39.566658 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.125) 0:00:43.139 ********** 2026-03-09 00:55:39.566662 | orchestrator | 2026-03-09 00:55:39.566667 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-09 00:55:39.566675 | orchestrator | Monday 09 March 2026 00:52:40 +0000 (0:00:00.120) 0:00:43.260 ********** 2026-03-09 00:55:39.566680 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:55:39.566685 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:55:39.566690 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:55:39.566694 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.566699 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.566704 | orchestrator | ok: [testbed-node-2] 2026-03-09 
00:55:39.566708 | orchestrator | 2026-03-09 00:55:39.566713 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-09 00:55:39.566717 | orchestrator | Monday 09 March 2026 00:52:42 +0000 (0:00:02.199) 0:00:45.460 ********** 2026-03-09 00:55:39.566722 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:39.566727 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:55:39.566731 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:55:39.566736 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.566740 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.566745 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:55:39.566749 | orchestrator | 2026-03-09 00:55:39.566754 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-09 00:55:39.566759 | orchestrator | 2026-03-09 00:55:39.566763 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:55:39.566768 | orchestrator | Monday 09 March 2026 00:52:50 +0000 (0:00:08.053) 0:00:53.513 ********** 2026-03-09 00:55:39.566775 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:39.566780 | orchestrator | 2026-03-09 00:55:39.566785 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:55:39.566789 | orchestrator | Monday 09 March 2026 00:52:50 +0000 (0:00:00.483) 0:00:53.997 ********** 2026-03-09 00:55:39.566794 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:39.566799 | orchestrator | 2026-03-09 00:55:39.566803 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-09 00:55:39.566856 | orchestrator | Monday 09 March 2026 00:52:51 +0000 (0:00:00.632) 
0:00:54.630 ********** 2026-03-09 00:55:39.566864 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.566872 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.566879 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.566885 | orchestrator | 2026-03-09 00:55:39.566892 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-09 00:55:39.566898 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:00.803) 0:00:55.433 ********** 2026-03-09 00:55:39.566905 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.566912 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.566919 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.566925 | orchestrator | 2026-03-09 00:55:39.566933 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-09 00:55:39.566940 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:00.297) 0:00:55.731 ********** 2026-03-09 00:55:39.566947 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.566953 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.566961 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.566968 | orchestrator | 2026-03-09 00:55:39.566975 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-09 00:55:39.566988 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:00.434) 0:00:56.166 ********** 2026-03-09 00:55:39.567011 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.567019 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.567027 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.567034 | orchestrator | 2026-03-09 00:55:39.567041 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-09 00:55:39.567049 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.301) 0:00:56.467 ********** 2026-03-09 00:55:39.567056 | orchestrator | ok: 
[testbed-node-0] 2026-03-09 00:55:39.567073 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.567078 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.567083 | orchestrator | 2026-03-09 00:55:39.567096 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-09 00:55:39.567101 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.338) 0:00:56.805 ********** 2026-03-09 00:55:39.567105 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567110 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567114 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567119 | orchestrator | 2026-03-09 00:55:39.567124 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-09 00:55:39.567128 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.329) 0:00:57.135 ********** 2026-03-09 00:55:39.567133 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567137 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567142 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567147 | orchestrator | 2026-03-09 00:55:39.567151 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-09 00:55:39.567156 | orchestrator | Monday 09 March 2026 00:52:54 +0000 (0:00:00.505) 0:00:57.640 ********** 2026-03-09 00:55:39.567160 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567165 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567169 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567174 | orchestrator | 2026-03-09 00:55:39.567179 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-09 00:55:39.567183 | orchestrator | Monday 09 March 2026 00:52:54 +0000 (0:00:00.282) 0:00:57.923 ********** 2026-03-09 00:55:39.567188 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:55:39.567192 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567197 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567201 | orchestrator | 2026-03-09 00:55:39.567206 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-09 00:55:39.567211 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.302) 0:00:58.225 ********** 2026-03-09 00:55:39.567215 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567220 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567224 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567229 | orchestrator | 2026-03-09 00:55:39.567234 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-09 00:55:39.567238 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.318) 0:00:58.544 ********** 2026-03-09 00:55:39.567243 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567247 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567252 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567257 | orchestrator | 2026-03-09 00:55:39.567261 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-09 00:55:39.567266 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.418) 0:00:58.962 ********** 2026-03-09 00:55:39.567270 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567275 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567279 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567284 | orchestrator | 2026-03-09 00:55:39.567288 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-09 00:55:39.567293 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.298) 0:00:59.260 ********** 2026-03-09 00:55:39.567297 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:55:39.567302 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567306 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567311 | orchestrator | 2026-03-09 00:55:39.567315 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-09 00:55:39.567320 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.323) 0:00:59.584 ********** 2026-03-09 00:55:39.567324 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567332 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567345 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567349 | orchestrator | 2026-03-09 00:55:39.567354 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-09 00:55:39.567359 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.282) 0:00:59.866 ********** 2026-03-09 00:55:39.567363 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567368 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567372 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567377 | orchestrator | 2026-03-09 00:55:39.567382 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-09 00:55:39.567386 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.276) 0:01:00.143 ********** 2026-03-09 00:55:39.567391 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567395 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567400 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567405 | orchestrator | 2026-03-09 00:55:39.567409 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-09 00:55:39.567414 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.449) 0:01:00.593 ********** 2026-03-09 00:55:39.567419 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:55:39.567423 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567428 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567433 | orchestrator | 2026-03-09 00:55:39.567437 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:55:39.567442 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.307) 0:01:00.900 ********** 2026-03-09 00:55:39.567446 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:39.567451 | orchestrator | 2026-03-09 00:55:39.567460 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-09 00:55:39.567465 | orchestrator | Monday 09 March 2026 00:52:58 +0000 (0:00:00.559) 0:01:01.459 ********** 2026-03-09 00:55:39.567470 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.567474 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.567479 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.567484 | orchestrator | 2026-03-09 00:55:39.567488 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-09 00:55:39.567493 | orchestrator | Monday 09 March 2026 00:52:58 +0000 (0:00:00.703) 0:01:02.163 ********** 2026-03-09 00:55:39.567498 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.567502 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.567507 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:39.567511 | orchestrator | 2026-03-09 00:55:39.567516 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-09 00:55:39.567520 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:00.869) 0:01:03.033 ********** 2026-03-09 00:55:39.567525 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567530 | orchestrator | skipping: [testbed-node-1] 
2026-03-09 00:55:39.567534 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567539 | orchestrator | 2026-03-09 00:55:39.567543 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-09 00:55:39.567548 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:00.456) 0:01:03.489 ********** 2026-03-09 00:55:39.567553 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567557 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567562 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567566 | orchestrator | 2026-03-09 00:55:39.567571 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-09 00:55:39.567575 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:00.628) 0:01:04.118 ********** 2026-03-09 00:55:39.567580 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567584 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567589 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567593 | orchestrator | 2026-03-09 00:55:39.567602 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-09 00:55:39.567607 | orchestrator | Monday 09 March 2026 00:53:01 +0000 (0:00:00.959) 0:01:05.077 ********** 2026-03-09 00:55:39.567611 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567616 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567621 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567625 | orchestrator | 2026-03-09 00:55:39.567630 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-09 00:55:39.567635 | orchestrator | Monday 09 March 2026 00:53:02 +0000 (0:00:00.496) 0:01:05.573 ********** 2026-03-09 00:55:39.567639 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567644 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 00:55:39.567648 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567653 | orchestrator | 2026-03-09 00:55:39.567657 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-09 00:55:39.567662 | orchestrator | Monday 09 March 2026 00:53:02 +0000 (0:00:00.452) 0:01:06.026 ********** 2026-03-09 00:55:39.567667 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.567671 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:39.567676 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:39.567680 | orchestrator | 2026-03-09 00:55:39.567685 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-09 00:55:39.567689 | orchestrator | Monday 09 March 2026 00:53:03 +0000 (0:00:00.420) 0:01:06.447 ********** 2026-03-09 00:55:39.567696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 
'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.567747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 
'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.567765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.567780 | orchestrator | 2026-03-09 00:55:39.567785 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-09 00:55:39.567790 | orchestrator | Monday 09 March 2026 00:53:06 +0000 (0:00:03.464) 0:01:09.911 ********** 2026-03-09 00:55:39.567798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-09 00:55:39.567821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.567849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.567859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.567864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.567868 | orchestrator | 2026-03-09 00:55:39.567873 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-09 00:55:39.567877 | orchestrator | Monday 09 March 2026 00:53:11 +0000 (0:00:05.247) 0:01:15.159 ********** 2026-03-09 00:55:39.567882 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-09 00:55:39.567887 | orchestrator | 2026-03-09 00:55:39.567894 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-09 00:55:39.567902 | orchestrator | Monday 09 March 2026 00:53:12 +0000 (0:00:00.690) 0:01:15.849 ********** 2026-03-09 00:55:39.567910 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:39.567919 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.567925 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.567929 | orchestrator | 2026-03-09 00:55:39.567934 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-09 00:55:39.567939 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:00.888) 0:01:16.737 ********** 2026-03-09 00:55:39.567943 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:39.567948 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.567952 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.567957 | orchestrator | 2026-03-09 00:55:39.567962 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-09 00:55:39.567966 | orchestrator | Monday 09 March 2026 00:53:15 +0000 (0:00:01.802) 0:01:18.540 ********** 2026-03-09 
00:55:39.567971 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.567975 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:39.567983 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.567988 | orchestrator | 2026-03-09 00:55:39.568011 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-09 00:55:39.568017 | orchestrator | Monday 09 March 2026 00:53:17 +0000 (0:00:02.149) 0:01:20.690 ********** 2026-03-09 00:55:39.568026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.568032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.568037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568094 | orchestrator |
2026-03-09 00:55:39.568099 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-03-09 00:55:39.568104 | orchestrator | Monday 09 March 2026 00:53:22 +0000 (0:00:04.962) 0:01:25.652 **********
2026-03-09 00:55:39.568108 | orchestrator | changed: [testbed-node-0] => {
2026-03-09 00:55:39.568115 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 00:55:39.568123 | orchestrator | }
2026-03-09 00:55:39.568129 | orchestrator | changed: [testbed-node-1] => {
2026-03-09 00:55:39.568136 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 00:55:39.568144 | orchestrator | }
2026-03-09 00:55:39.568152 | orchestrator | changed: [testbed-node-2] => {
2026-03-09 00:55:39.568160 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 00:55:39.568168 | orchestrator | }
2026-03-09 00:55:39.568174 | orchestrator |
2026-03-09 00:55:39.568179 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-09 00:55:39.568183 | orchestrator | Monday 09 March 2026 00:53:22 +0000 (0:00:00.487) 0:01:26.140 **********
2026-03-09 00:55:39.568190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568254 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-2, testbed-node-0, testbed-node-1 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.568259 | orchestrator |
2026-03-09 00:55:39.568264 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-03-09 00:55:39.568268 | orchestrator | Monday 09 March 2026 00:53:26 +0000 (0:00:03.195) 0:01:29.335 **********
2026-03-09 00:55:39.568273 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-03-09 00:55:39.568278 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-03-09 00:55:39.568283 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-03-09 00:55:39.568287 | orchestrator |
2026-03-09 00:55:39.568292 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-03-09 00:55:39.568296 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:00.990) 0:01:30.326 **********
2026-03-09 00:55:39.568301 | orchestrator | changed: [testbed-node-0] => {
2026-03-09 00:55:39.568305 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 00:55:39.568310 | orchestrator | }
2026-03-09 00:55:39.568315 | orchestrator | changed: [testbed-node-1] => {
2026-03-09 00:55:39.568319 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 00:55:39.568324 | orchestrator | }
2026-03-09 00:55:39.568328 | orchestrator | changed: [testbed-node-2] => {
2026-03-09 00:55:39.568333 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 00:55:39.568341 | orchestrator | }
2026-03-09 00:55:39.568346 | orchestrator |
2026-03-09 00:55:39.568350 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:39.568355 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:00.770) 0:01:31.097 **********
2026-03-09 00:55:39.568359 | orchestrator |
2026-03-09 00:55:39.568364 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:39.568369 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:00.085) 0:01:31.182 **********
2026-03-09 00:55:39.568373 | orchestrator |
2026-03-09 00:55:39.568378 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:39.568382 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:00.070) 0:01:31.252 **********
2026-03-09 00:55:39.568387 | orchestrator |
2026-03-09 00:55:39.568392 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-09 00:55:39.568396 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:00.072) 0:01:31.325 **********
2026-03-09 00:55:39.568401 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.568405 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:39.568410 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:39.568415 | orchestrator |
2026-03-09 00:55:39.568420 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-09 00:55:39.568424 | orchestrator | Monday 09 March 2026 00:53:38 +0000 (0:00:10.582) 0:01:41.908 **********
2026-03-09 00:55:39.568429 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:39.568433 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.568437 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:39.568442 | orchestrator |
2026-03-09 00:55:39.568447 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-03-09 00:55:39.568456 | orchestrator | Monday 09 March 2026 00:53:54 +0000 (0:00:15.779) 0:01:57.688 **********
2026-03-09 00:55:39.568460 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-03-09 00:55:39.568465 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-03-09 00:55:39.568469 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-03-09 00:55:39.568474 | orchestrator |
2026-03-09 00:55:39.568479 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-09 00:55:39.568483 | orchestrator | Monday 09 March 2026 00:54:05 +0000 (0:00:10.910) 0:02:08.598 **********
2026-03-09 00:55:39.568488 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.568492 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:39.568497 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:39.568502 | orchestrator |
2026-03-09 00:55:39.568507 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-09 00:55:39.568511 | orchestrator | Monday 09 March 2026 00:54:14 +0000 (0:00:09.159) 0:02:17.757 **********
2026-03-09 00:55:39.568516 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:39.568520 | orchestrator |
2026-03-09 00:55:39.568525 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-09 00:55:39.568529 | orchestrator | Monday 09 March 2026 00:54:14 +0000 (0:00:00.140) 0:02:17.898 **********
2026-03-09 00:55:39.568534 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.568539 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.568543 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.568548 | orchestrator |
2026-03-09 00:55:39.568552 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-09 00:55:39.568557 | orchestrator | Monday 09 March 2026 00:54:15 +0000 (0:00:00.860) 0:02:18.758 **********
2026-03-09 00:55:39.568561 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:39.568566 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:39.568571 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.568575 | orchestrator |
2026-03-09 00:55:39.568580 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-09 00:55:39.568585 | orchestrator | Monday 09 March 2026 00:54:16 +0000 (0:00:00.714) 0:02:19.473 **********
2026-03-09 00:55:39.568589 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.568594 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.568598 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.568603 | orchestrator |
2026-03-09 00:55:39.568617 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-09 00:55:39.568622 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:01.560) 0:02:21.033 **********
2026-03-09 00:55:39.568632 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:39.568637 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:39.568645 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.568651 | orchestrator |
2026-03-09 00:55:39.568659 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-09 00:55:39.568667 | orchestrator | Monday 09 March 2026 00:54:18 +0000 (0:00:00.642) 0:02:21.676 **********
2026-03-09 00:55:39.568675 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.568682 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.568689 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.568698 | orchestrator |
2026-03-09 00:55:39.568703 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-09 00:55:39.568708 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:01.055) 0:02:22.731 **********
2026-03-09 00:55:39.568713 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.568717 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.568721 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.568726 | orchestrator |
2026-03-09 00:55:39.568731 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-09 00:55:39.568736 | orchestrator | Monday 09 March 2026 00:54:20 +0000 (0:00:01.043) 0:02:23.775 **********
2026-03-09 00:55:39.568741 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-09 00:55:39.568750 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-09 00:55:39.568754 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-09 00:55:39.568759 | orchestrator |
2026-03-09 00:55:39.568764 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-09 00:55:39.568769 | orchestrator | Monday 09 March 2026 00:54:21 +0000 (0:00:01.198) 0:02:24.974 **********
2026-03-09 00:55:39.568773 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.568778 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.568783 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.568787 | orchestrator |
2026-03-09 00:55:39.568792 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-09 00:55:39.570307 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:00.341) 0:02:25.315 **********
2026-03-09 00:55:39.570350 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570359 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570365 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570371 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570401 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570420 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570431 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570442 | orchestrator |
2026-03-09 00:55:39.570447 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-09 00:55:39.570452 | orchestrator | Monday 09 March 2026 00:54:26 +0000 (0:00:03.897) 0:02:29.213 **********
2026-03-09 00:55:39.570458 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570464 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570482 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570538 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570604 | orchestrator |
2026-03-09 00:55:39.570611 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-03-09 00:55:39.570619 | orchestrator | Monday 09 March 2026 00:54:32 +0000 (0:00:06.070) 0:02:35.283 **********
2026-03-09 00:55:39.570627 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-03-09 00:55:39.570635 | orchestrator |
2026-03-09 00:55:39.570643 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-03-09 00:55:39.570650 | orchestrator | Monday 09 March 2026 00:54:32 +0000 (0:00:00.832) 0:02:36.116 **********
2026-03-09 00:55:39.570658 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.570666 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.570676 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.570683 | orchestrator |
2026-03-09 00:55:39.570691 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-03-09 00:55:39.570699 | orchestrator | Monday 09 March 2026 00:54:33 +0000 (0:00:00.755) 0:02:36.871 **********
2026-03-09 00:55:39.570706 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.570714 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.570722 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.570730 | orchestrator |
2026-03-09 00:55:39.570738 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-03-09 00:55:39.570746 | orchestrator | Monday 09 March 2026 00:54:35 +0000 (0:00:01.600) 0:02:38.471 **********
2026-03-09 00:55:39.570755 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.570763 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.570772 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.570780 | orchestrator |
2026-03-09 00:55:39.570788 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-03-09 00:55:39.570796 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:01.925) 0:02:40.396 **********
2026-03-09 00:55:39.570806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570817 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570826 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570831 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:39.570901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes':
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.570928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.570939 | orchestrator | 2026-03-09 00:55:39.570948 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-09 00:55:39.570957 | orchestrator | Monday 09 March 2026 00:54:41 +0000 (0:00:04.692) 0:02:45.089 ********** 2026-03-09 00:55:39.570964 | orchestrator | ok: [testbed-node-0] => { 2026-03-09 00:55:39.570971 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.570976 | orchestrator | } 2026-03-09 00:55:39.570983 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:55:39.570989 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.571008 | orchestrator | } 2026-03-09 00:55:39.571014 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:55:39.571020 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.571026 | orchestrator | } 2026-03-09 00:55:39.571032 | orchestrator | 2026-03-09 00:55:39.571037 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:55:39.571043 | orchestrator | Monday 09 March 2026 00:54:42 +0000 (0:00:00.622) 0:02:45.711 ********** 2026-03-09 00:55:39.571056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:39.571124 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:39.571134 | orchestrator | 2026-03-09 00:55:39.571139 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-09 00:55:39.571145 | orchestrator | Monday 09 March 2026 00:54:44 +0000 (0:00:02.186) 0:02:47.898 ********** 2026-03-09 00:55:39.571151 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-09 00:55:39.571157 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-09 00:55:39.571163 | orchestrator | 
changed: [testbed-node-2] => (item=[1]) 2026-03-09 00:55:39.571169 | orchestrator | 2026-03-09 00:55:39.571176 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-09 00:55:39.571181 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:01.289) 0:02:49.188 ********** 2026-03-09 00:55:39.571187 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:55:39.571194 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.571199 | orchestrator | } 2026-03-09 00:55:39.571205 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:55:39.571211 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.571218 | orchestrator | } 2026-03-09 00:55:39.571224 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:55:39.571230 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:55:39.571236 | orchestrator | } 2026-03-09 00:55:39.571241 | orchestrator | 2026-03-09 00:55:39.571247 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-09 00:55:39.571253 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:00.700) 0:02:49.888 ********** 2026-03-09 00:55:39.571260 | orchestrator | 2026-03-09 00:55:39.571266 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-09 00:55:39.571272 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:00.148) 0:02:50.036 ********** 2026-03-09 00:55:39.571278 | orchestrator | 2026-03-09 00:55:39.571284 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-09 00:55:39.571289 | orchestrator | Monday 09 March 2026 00:54:47 +0000 (0:00:00.188) 0:02:50.225 ********** 2026-03-09 00:55:39.571294 | orchestrator | 2026-03-09 00:55:39.571300 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-09 00:55:39.571305 | orchestrator | Monday 09 March 
2026 00:54:47 +0000 (0:00:00.166) 0:02:50.391 ********** 2026-03-09 00:55:39.571310 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.571315 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.571320 | orchestrator | 2026-03-09 00:55:39.571326 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-09 00:55:39.571334 | orchestrator | Monday 09 March 2026 00:55:01 +0000 (0:00:13.928) 0:03:04.320 ********** 2026-03-09 00:55:39.571342 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:39.571351 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:39.571359 | orchestrator | 2026-03-09 00:55:39.571366 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-09 00:55:39.571375 | orchestrator | Monday 09 March 2026 00:55:14 +0000 (0:00:13.770) 0:03:18.091 ********** 2026-03-09 00:55:39.571387 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-09 00:55:39.571396 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-09 00:55:39.571404 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-03-09 00:55:39.571412 | orchestrator | 2026-03-09 00:55:39.571421 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-09 00:55:39.571428 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:15.594) 0:03:33.686 ********** 2026-03-09 00:55:39.571437 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:39.571443 | orchestrator | 2026-03-09 00:55:39.571449 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-09 00:55:39.571454 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:00.284) 0:03:33.970 ********** 2026-03-09 00:55:39.571463 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:39.571468 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:39.571473 | orchestrator | ok: [testbed-node-2] 
2026-03-09 00:55:39.571478 | orchestrator |
2026-03-09 00:55:39.571483 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-09 00:55:39.571489 | orchestrator | Monday 09 March 2026 00:55:31 +0000 (0:00:00.908) 0:03:34.878 **********
2026-03-09 00:55:39.571493 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:39.571499 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:39.571504 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.571509 | orchestrator |
2026-03-09 00:55:39.571514 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-09 00:55:39.571519 | orchestrator | Monday 09 March 2026 00:55:32 +0000 (0:00:00.942) 0:03:35.821 **********
2026-03-09 00:55:39.571524 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.571529 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.571534 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.571539 | orchestrator |
2026-03-09 00:55:39.571544 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-09 00:55:39.571553 | orchestrator | Monday 09 March 2026 00:55:33 +0000 (0:00:00.918) 0:03:36.739 **********
2026-03-09 00:55:39.571558 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:39.571563 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:39.571568 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:39.571573 | orchestrator |
2026-03-09 00:55:39.571578 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-09 00:55:39.571583 | orchestrator | Monday 09 March 2026 00:55:34 +0000 (0:00:00.632) 0:03:37.371 **********
2026-03-09 00:55:39.571588 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.571594 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.571599 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.571604 | orchestrator |
2026-03-09 00:55:39.571609 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-09 00:55:39.571614 | orchestrator | Monday 09 March 2026 00:55:35 +0000 (0:00:00.816) 0:03:38.188 **********
2026-03-09 00:55:39.571619 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:39.571624 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:39.571629 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:39.571634 | orchestrator |
2026-03-09 00:55:39.571639 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-09 00:55:39.571644 | orchestrator | Monday 09 March 2026 00:55:35 +0000 (0:00:00.802) 0:03:38.991 **********
2026-03-09 00:55:39.571649 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-09 00:55:39.571654 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-09 00:55:39.571659 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-09 00:55:39.571664 | orchestrator |
2026-03-09 00:55:39.571669 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:55:39.571675 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-09 00:55:39.571681 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-03-09 00:55:39.571686 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-03-09 00:55:39.571691 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 00:55:39.571696 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 00:55:39.571701 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 00:55:39.571710 | orchestrator |
2026-03-09 00:55:39.571715 | orchestrator |
2026-03-09 00:55:39.571720 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:55:39.571725 | orchestrator | Monday 09 March 2026 00:55:37 +0000 (0:00:01.257) 0:03:40.248 **********
2026-03-09 00:55:39.571730 | orchestrator | ===============================================================================
2026-03-09 00:55:39.571735 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 29.55s
2026-03-09 00:55:39.571740 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 26.50s
2026-03-09 00:55:39.571745 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 24.51s
2026-03-09 00:55:39.571750 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.32s
2026-03-09 00:55:39.571755 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.16s
2026-03-09 00:55:39.571760 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.05s
2026-03-09 00:55:39.571765 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.07s
2026-03-09 00:55:39.571773 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.25s
2026-03-09 00:55:39.571778 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.96s
2026-03-09 00:55:39.571783 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.69s
2026-03-09 00:55:39.571788 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 4.09s
2026-03-09 00:55:39.571793 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.90s
2026-03-09 00:55:39.571798 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.46s
2026-03-09 00:55:39.571803 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.20s
2026-03-09 00:55:39.571808 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.20s
2026-03-09 00:55:39.571813 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.19s
2026-03-09 00:55:39.571818 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.15s
2026-03-09 00:55:39.571823 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.14s
2026-03-09 00:55:39.571828 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.11s
2026-03-09 00:55:39.571833 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.97s
2026-03-09 00:55:39.571839 | orchestrator | 2026-03-09 00:55:39 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:55:42.610332 | orchestrator | 2026-03-09 00:55:42 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:55:42.616340 | orchestrator | 2026-03-09 00:55:42 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED
2026-03-09 00:55:42.616586 | orchestrator | 2026-03-09 00:55:42 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:57:38.526341 | orchestrator | 2026-03-09 00:57:38 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED
2026-03-09 00:57:38.529453 | orchestrator | 2026-03-09 00:57:38 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED
2026-03-09 00:57:38.529571 | orchestrator | 2026-03-09 00:57:38 | INFO  | Wait 1 second(s)
until the next check 2026-03-09 00:57:41.566758 | orchestrator | 2026-03-09 00:57:41 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:41.568029 | orchestrator | 2026-03-09 00:57:41 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:57:41.568109 | orchestrator | 2026-03-09 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:57:44.617284 | orchestrator | 2026-03-09 00:57:44 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:44.618099 | orchestrator | 2026-03-09 00:57:44 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:57:44.618137 | orchestrator | 2026-03-09 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:57:47.657185 | orchestrator | 2026-03-09 00:57:47 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:47.659160 | orchestrator | 2026-03-09 00:57:47 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:57:47.659292 | orchestrator | 2026-03-09 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:57:50.709149 | orchestrator | 2026-03-09 00:57:50 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:50.710865 | orchestrator | 2026-03-09 00:57:50 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:57:50.710911 | orchestrator | 2026-03-09 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:57:53.763945 | orchestrator | 2026-03-09 00:57:53 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:53.767205 | orchestrator | 2026-03-09 00:57:53 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state STARTED 2026-03-09 00:57:53.767257 | orchestrator | 2026-03-09 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:57:56.810416 | orchestrator | 2026-03-09 
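The run above polls two OSISM task UUIDs every few seconds, logging each task's state until it leaves STARTED. A minimal sketch of such a polling loop, where `get_task_state` is a hypothetical callable standing in for the real task-status lookup:

```python
import time

# States after which a task will not change again (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task reaches a terminal state.

    `get_task_state` maps a task ID to a state string such as
    "STARTED" or "SUCCESS"; `interval` is the pause between checks.
    Returns {task_id: final_state}.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Once a task reports a terminal state it is dropped from the pending set, so later iterations only poll the tasks that are still running, matching how the second task disappears from the log after its SUCCESS line.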
00:57:56 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:57:56.812396 | orchestrator | 2026-03-09 00:57:56 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:56.823549 | orchestrator | 2026-03-09 00:57:56 | INFO  | Task 661decfa-ea91-44a1-ad57-ace7bcc06740 is in state SUCCESS 2026-03-09 00:57:56.823984 | orchestrator | 2026-03-09 00:57:56.826296 | orchestrator | 2026-03-09 00:57:56.826391 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:57:56.826414 | orchestrator | 2026-03-09 00:57:56.826450 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:57:56.826467 | orchestrator | Monday 09 March 2026 00:50:27 +0000 (0:00:00.698) 0:00:00.698 ********** 2026-03-09 00:57:56.826543 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.826561 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.826577 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.826687 | orchestrator | 2026-03-09 00:57:56.826706 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:57:56.826722 | orchestrator | Monday 09 March 2026 00:50:27 +0000 (0:00:00.385) 0:00:01.084 ********** 2026-03-09 00:57:56.826739 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-09 00:57:56.826754 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-09 00:57:56.826769 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-09 00:57:56.826784 | orchestrator | 2026-03-09 00:57:56.826800 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-09 00:57:56.826815 | orchestrator | 2026-03-09 00:57:56.826831 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-09 00:57:56.826873 | 
orchestrator | Monday 09 March 2026 00:50:28 +0000 (0:00:00.773) 0:00:01.857 ********** 2026-03-09 00:57:56.826891 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.826907 | orchestrator | 2026-03-09 00:57:56.826923 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-09 00:57:56.826939 | orchestrator | Monday 09 March 2026 00:50:30 +0000 (0:00:01.747) 0:00:03.605 ********** 2026-03-09 00:57:56.826955 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.826972 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.826988 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.827005 | orchestrator | 2026-03-09 00:57:56.827021 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-09 00:57:56.827039 | orchestrator | Monday 09 March 2026 00:50:31 +0000 (0:00:01.157) 0:00:04.762 ********** 2026-03-09 00:57:56.827056 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.827072 | orchestrator | 2026-03-09 00:57:56.827089 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-09 00:57:56.827106 | orchestrator | Monday 09 March 2026 00:50:33 +0000 (0:00:01.948) 0:00:06.711 ********** 2026-03-09 00:57:56.827123 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.827168 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.827186 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.827202 | orchestrator | 2026-03-09 00:57:56.827219 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-09 00:57:56.827234 | orchestrator | Monday 09 March 2026 00:50:34 +0000 (0:00:00.778) 0:00:07.490 ********** 2026-03-09 00:57:56.827250 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:57:56.827266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:57:56.827283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:57:56.827297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:57:56.827313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:57:56.827329 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-09 00:57:56.827347 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-09 00:57:56.827361 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:57:56.827378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-09 00:57:56.827394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-09 00:57:56.827408 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-09 00:57:56.827423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-09 00:57:56.827439 | orchestrator | 2026-03-09 00:57:56.827453 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 00:57:56.827468 | orchestrator | Monday 09 March 2026 00:50:38 +0000 (0:00:04.714) 0:00:12.205 ********** 2026-03-09 00:57:56.827484 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-09 00:57:56.827629 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-09 00:57:56.827766 | orchestrator | changed: 
[testbed-node-2] => (item=ip_vs) 2026-03-09 00:57:56.827785 | orchestrator | 2026-03-09 00:57:56.827801 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 00:57:56.827818 | orchestrator | Monday 09 March 2026 00:50:40 +0000 (0:00:01.536) 0:00:13.741 ********** 2026-03-09 00:57:56.827869 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-09 00:57:56.827886 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-09 00:57:56.827901 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-09 00:57:56.827916 | orchestrator | 2026-03-09 00:57:56.827931 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 00:57:56.827947 | orchestrator | Monday 09 March 2026 00:50:42 +0000 (0:00:02.476) 0:00:16.218 ********** 2026-03-09 00:57:56.827962 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-09 00:57:56.827977 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.828021 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-09 00:57:56.828038 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.828054 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-09 00:57:56.828070 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.828086 | orchestrator | 2026-03-09 00:57:56.828116 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-09 00:57:56.828133 | orchestrator | Monday 09 March 2026 00:50:46 +0000 (0:00:03.337) 0:00:19.555 ********** 2026-03-09 00:57:56.828154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.828195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.828255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.828274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.828290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.828319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.828345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.828375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.828392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.828408 | orchestrator | 2026-03-09 00:57:56.828425 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-09 00:57:56.828442 | orchestrator | Monday 09 March 2026 00:50:48 +0000 (0:00:02.733) 0:00:22.289 ********** 2026-03-09 00:57:56.828494 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.828510 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.828526 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.828542 | orchestrator | 2026-03-09 00:57:56.828557 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-09 00:57:56.828574 | orchestrator | 
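The haproxy container definitions above carry a Docker healthcheck of the form `healthcheck_curl http://<node-ip>:61313` with a 30-second interval, 3 retries, and a 30-second timeout. A rough approximation of what such a curl-based probe does, assuming the simple rule that an HTTP 2xx answer means healthy (the real `healthcheck_curl` is a Kolla helper script; this sketch only mirrors its exit-code contract):

```python
import urllib.error
import urllib.request

def healthcheck_curl(url, timeout=30):
    """Return 0 (healthy) if `url` answers with HTTP 2xx, else 1.

    Docker interprets exit code 0 as healthy and 1 as unhealthy,
    retrying up to `retries` times before marking the container bad.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if 200 <= resp.status < 300 else 1
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, or HTTP error
        # status (HTTPError is a URLError subclass) -> unhealthy.
        return 1
```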
Monday 09 March 2026 00:50:50 +0000 (0:00:01.739) 0:00:24.029 ********** 2026-03-09 00:57:56.828590 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-09 00:57:56.828606 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-09 00:57:56.828620 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-09 00:57:56.828635 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-09 00:57:56.828650 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-09 00:57:56.828666 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-09 00:57:56.828683 | orchestrator | 2026-03-09 00:57:56.828698 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-09 00:57:56.828714 | orchestrator | Monday 09 March 2026 00:50:53 +0000 (0:00:03.320) 0:00:27.349 ********** 2026-03-09 00:57:56.828728 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.828744 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.828758 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.828773 | orchestrator | 2026-03-09 00:57:56.828789 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-09 00:57:56.828806 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:02.442) 0:00:29.792 ********** 2026-03-09 00:57:56.828822 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.828911 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.828927 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.828943 | orchestrator | 2026-03-09 00:57:56.828958 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-09 00:57:56.828973 | orchestrator | Monday 09 March 2026 00:51:00 +0000 (0:00:03.976) 0:00:33.769 ********** 2026-03-09 00:57:56.828990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.829041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.829059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.829077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:57:56.829094 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.829111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.829127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.829143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.829168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.829201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:57:56.829217 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.829233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.829250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.829267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:57:56.829284 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.829300 | orchestrator | 2026-03-09 00:57:56.829317 | orchestrator | 
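The "Removing checks for services which are disabled" task above skips every service whose `enabled` flag is true and would only act on disabled entries (here `haproxy-ssh`, the one service with `enabled: False`). The selection logic can be sketched as a plain filter over a Kolla-style service map (the function name is illustrative; service names and the `enabled` field are taken from the log):

```python
def partition_services(services):
    """Split a Kolla-style service map into (enabled, disabled) name lists.

    `services` maps service names to dicts that each carry an
    `enabled` boolean, as in the haproxy/proxysql/keepalived
    definitions logged above.
    """
    enabled = [name for name, svc in services.items() if svc.get("enabled")]
    disabled = [name for name, svc in services.items() if not svc.get("enabled")]
    return enabled, disabled
```

In the Ansible role this filtering is expressed as a `when` condition on the loop item rather than a separate function, which is why the log shows per-item `skipping:` lines for each enabled service.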
TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-09 00:57:56.829334 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:01.050) 0:00:34.820 ********** 2026-03-09 00:57:56.829352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.829474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111', 
'__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:57:56.829491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.829536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111', 
'__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:57:56.829568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.829601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111', 
'__omit_place_holder__31329af9fb33f049296602e01f9478677bc1c111'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:57:56.829616 | orchestrator | 2026-03-09 00:57:56.829631 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-09 00:57:56.829647 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:05.103) 0:00:39.923 ********** 2026-03-09 00:57:56.829665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.829815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.829832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.829883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.829902 | orchestrator | 2026-03-09 00:57:56.829918 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-09 00:57:56.829934 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:04.657) 0:00:44.581 ********** 2026-03-09 00:57:56.829950 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:57:56.829968 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:57:56.829984 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:57:56.830001 | orchestrator | 2026-03-09 00:57:56.830087 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-09 00:57:56.830112 | orchestrator | Monday 09 March 2026 00:51:14 +0000 (0:00:03.014) 0:00:47.595 ********** 2026-03-09 00:57:56.830130 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:57:56.830148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:57:56.830166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:57:56.830184 | orchestrator | 2026-03-09 00:57:56.831004 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-09 00:57:56.831085 | orchestrator | Monday 09 March 2026 00:51:19 +0000 (0:00:05.631) 0:00:53.227 ********** 2026-03-09 00:57:56.831101 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.831114 
| orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.831126 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.831137 | orchestrator | 2026-03-09 00:57:56.831149 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-09 00:57:56.831156 | orchestrator | Monday 09 March 2026 00:51:20 +0000 (0:00:00.910) 0:00:54.137 ********** 2026-03-09 00:57:56.831164 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:57:56.831172 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:57:56.831179 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:57:56.831185 | orchestrator | 2026-03-09 00:57:56.831192 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-09 00:57:56.831199 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:02.713) 0:00:56.850 ********** 2026-03-09 00:57:56.831206 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:57:56.831213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:57:56.831220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:57:56.831226 | orchestrator | 2026-03-09 00:57:56.831233 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-09 00:57:56.831255 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:02.521) 0:00:59.372 ********** 2026-03-09 00:57:56.831263 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.831269 | orchestrator | 2026-03-09 00:57:56.831276 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-09 00:57:56.831283 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:00.604) 0:00:59.976 ********** 2026-03-09 00:57:56.831290 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-09 00:57:56.831297 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-09 00:57:56.831304 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-09 00:57:56.831311 | orchestrator | 2026-03-09 00:57:56.831317 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-09 00:57:56.831324 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:01.737) 0:01:01.714 ********** 2026-03-09 00:57:56.831332 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-09 00:57:56.831339 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-09 00:57:56.831345 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-09 00:57:56.831352 | orchestrator | 2026-03-09 00:57:56.831359 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-09 00:57:56.831366 | orchestrator | Monday 09 March 2026 00:51:31 +0000 (0:00:03.313) 0:01:05.027 ********** 2026-03-09 00:57:56.831372 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.831379 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.831386 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.831393 | orchestrator | 2026-03-09 00:57:56.831399 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-09 00:57:56.831407 | orchestrator | Monday 09 March 2026 00:51:32 +0000 
(0:00:00.383) 0:01:05.411 ********** 2026-03-09 00:57:56.831418 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.831428 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.831438 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.831450 | orchestrator | 2026-03-09 00:57:56.831463 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-09 00:57:56.831475 | orchestrator | Monday 09 March 2026 00:51:32 +0000 (0:00:00.508) 0:01:05.919 ********** 2026-03-09 00:57:56.831489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.831526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.831539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.831560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.831571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.831584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.831591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.831600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.831614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.831621 | orchestrator | 2026-03-09 00:57:56.831629 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-09 00:57:56.831644 | orchestrator | Monday 09 March 2026 00:51:35 +0000 (0:00:03.368) 0:01:09.288 ********** 2026-03-09 00:57:56.831724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.831744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.831756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.831768 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.831780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.831791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.831803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.831814 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.831910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.831937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.831949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.831959 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.831969 | orchestrator | 2026-03-09 00:57:56.831983 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-09 00:57:56.831995 | orchestrator | Monday 09 March 2026 00:51:37 +0000 (0:00:01.122) 0:01:10.410 ********** 2026-03-09 00:57:56.832006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.832017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-09 00:57:56.832030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.832041 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.832063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.832081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.832093 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.832104 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.832114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.832125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.832135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.832146 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.832156 | orchestrator | 2026-03-09 00:57:56.832167 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-09 00:57:56.832177 | orchestrator | Monday 09 March 2026 00:51:38 +0000 (0:00:01.831) 0:01:12.241 ********** 2026-03-09 00:57:56.832187 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:57:56.832203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:57:56.832214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:57:56.832224 | orchestrator | 2026-03-09 00:57:56.832237 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-09 00:57:56.832249 | orchestrator | Monday 09 March 2026 00:51:40 +0000 (0:00:01.732) 0:01:13.974 ********** 2026-03-09 00:57:56.832260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:57:56.832275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:57:56.832285 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:57:56.832294 | 
orchestrator | 2026-03-09 00:57:56.832308 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-09 00:57:56.832319 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:02.342) 0:01:16.317 ********** 2026-03-09 00:57:56.832329 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:57:56.832339 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:57:56.832349 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:57:56.832359 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.832371 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:57:56.832381 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:57:56.832391 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.832401 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:57:56.832411 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.832422 | orchestrator | 2026-03-09 00:57:56.832432 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-09 00:57:56.832442 | orchestrator | Monday 09 March 2026 00:51:44 +0000 (0:00:01.750) 0:01:18.067 ********** 2026-03-09 00:57:56.832453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.832463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.832474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.832492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.832514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.832526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.832537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.832548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.832558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.832568 | orchestrator | 2026-03-09 00:57:56.832586 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-09 00:57:56.832597 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:02.628) 0:01:20.696 ********** 2026-03-09 00:57:56.832607 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:57:56.832618 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:57:56.832628 | orchestrator | } 2026-03-09 00:57:56.832640 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:57:56.832652 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:57:56.832662 | orchestrator | } 2026-03-09 00:57:56.832672 | orchestrator | changed: 
[testbed-node-2] => { 2026-03-09 00:57:56.832686 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:57:56.832698 | orchestrator | } 2026-03-09 00:57:56.832709 | orchestrator | 2026-03-09 00:57:56.832719 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 00:57:56.832726 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:00.358) 0:01:21.055 ********** 2026-03-09 00:57:56.832733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.832752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.832760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.832767 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.832774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.832782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.832795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.832802 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.832810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.832817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.832861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.832870 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.832878 | orchestrator | 2026-03-09 00:57:56.832884 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-09 00:57:56.832891 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:01.281) 0:01:22.337 ********** 2026-03-09 00:57:56.832898 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.832905 | orchestrator | 2026-03-09 00:57:56.832912 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-09 00:57:56.832918 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:00.746) 0:01:23.084 ********** 2026-03-09 00:57:56.832927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-09 00:57:56.832943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:57:56.832952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.832959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.832980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.832989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.832996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:57:56.833008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:57:56.833015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833053 | orchestrator | 2026-03-09 00:57:56.833060 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-09 00:57:56.833067 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:06.114) 0:01:29.198 ********** 2026-03-09 00:57:56.833074 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.833086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:57:56.833093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833107 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.833123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.833131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:57:56.833144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833163 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.833174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.833187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:57:56.833208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833242 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.833253 | orchestrator | 2026-03-09 00:57:56.833264 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-09 00:57:56.833274 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:01.233) 0:01:30.432 ********** 2026-03-09 00:57:56.833288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833345 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.833355 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.833366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833387 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.833398 | orchestrator | 2026-03-09 00:57:56.833410 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-09 00:57:56.833420 | orchestrator | Monday 09 March 2026 00:51:58 +0000 (0:00:01.682) 0:01:32.114 ********** 2026-03-09 00:57:56.833430 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.833440 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.833449 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.833459 | orchestrator | 2026-03-09 00:57:56.833468 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-09 00:57:56.833479 | orchestrator | Monday 09 March 2026 00:52:00 +0000 (0:00:01.645) 0:01:33.759 ********** 2026-03-09 00:57:56.833489 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.833499 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.833510 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.833520 | orchestrator | 2026-03-09 00:57:56.833529 | orchestrator | TASK [include_role : barbican] 
************************************************* 2026-03-09 00:57:56.833540 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:02.399) 0:01:36.159 ********** 2026-03-09 00:57:56.833549 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.833559 | orchestrator | 2026-03-09 00:57:56.833570 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-09 00:57:56.833581 | orchestrator | Monday 09 March 2026 00:52:03 +0000 (0:00:01.057) 0:01:37.216 ********** 2026-03-09 00:57:56.833611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.833637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.833664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.833685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833716 | orchestrator | 2026-03-09 00:57:56.833722 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external 
frontend] *** 2026-03-09 00:57:56.833730 | orchestrator | Monday 09 March 2026 00:52:08 +0000 (0:00:05.034) 0:01:42.251 ********** 2026-03-09 00:57:56.833737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.833758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.833773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833788 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.833795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833802 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.833813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.833885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.833903 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.833910 | orchestrator | 2026-03-09 00:57:56.833917 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-09 00:57:56.833924 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:01.183) 0:01:43.434 ********** 2026-03-09 00:57:56.833931 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833946 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.833953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833967 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.833974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.833995 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.834002 | orchestrator | 2026-03-09 
00:57:56.834008 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-09 00:57:56.834062 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:01.122) 0:01:44.557 ********** 2026-03-09 00:57:56.834072 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.834079 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.834085 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.834092 | orchestrator | 2026-03-09 00:57:56.834099 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-09 00:57:56.834105 | orchestrator | Monday 09 March 2026 00:52:12 +0000 (0:00:01.684) 0:01:46.242 ********** 2026-03-09 00:57:56.834112 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.834119 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.834126 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.834133 | orchestrator | 2026-03-09 00:57:56.834140 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-09 00:57:56.834146 | orchestrator | Monday 09 March 2026 00:52:15 +0000 (0:00:02.819) 0:01:49.062 ********** 2026-03-09 00:57:56.834153 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.834160 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.834167 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.834174 | orchestrator | 2026-03-09 00:57:56.834186 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-09 00:57:56.834193 | orchestrator | Monday 09 March 2026 00:52:15 +0000 (0:00:00.262) 0:01:49.325 ********** 2026-03-09 00:57:56.834206 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.834218 | orchestrator | 2026-03-09 00:57:56.834229 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] 
******************* 2026-03-09 00:57:56.834240 | orchestrator | Monday 09 March 2026 00:52:16 +0000 (0:00:00.875) 0:01:50.200 ********** 2026-03-09 00:57:56.834253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:57:56.834268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:57:56.834289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:57:56.834302 | orchestrator | 2026-03-09 00:57:56.834309 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-09 00:57:56.834316 | orchestrator | Monday 09 March 2026 00:52:22 +0000 (0:00:06.004) 0:01:56.204 ********** 2026-03-09 00:57:56.834323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:57:56.834330 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:57:56.834346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:57:56.834354 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.834361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:57:56.834368 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.834375 | orchestrator | 2026-03-09 00:57:56.834382 | orchestrator | TASK [haproxy-config : Configuring 
firewall for ceph-rgw] **********************
2026-03-09 00:57:56.834394 | orchestrator | Monday 09 March 2026 00:52:24 +0000 (0:00:01.479) 0:01:57.683 **********
2026-03-09 00:57:56.834401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-09 00:57:56.834410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-09 00:57:56.834418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-09 00:57:56.834425 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.834432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-09 00:57:56.834439 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.834447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-09 00:57:56.834462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-09 00:57:56.834469 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.834476 | orchestrator |
2026-03-09 00:57:56.834483 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-09 00:57:56.834490 | orchestrator | Monday 09 March 2026 00:52:26 +0000 (0:00:01.831) 0:01:59.515 **********
2026-03-09 00:57:56.834496 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.834503 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.834510 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.834516 | orchestrator |
2026-03-09 00:57:56.834523 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-09 00:57:56.834530 | orchestrator | Monday 09 March 2026 00:52:26 +0000 (0:00:00.490) 0:02:00.005 **********
2026-03-09 00:57:56.834536 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.834543 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.834549 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.834556 | orchestrator |
2026-03-09 00:57:56.834562 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-09 00:57:56.834573 | orchestrator | Monday 09 March 2026 00:52:28 +0000 (0:00:01.410) 0:02:01.416 **********
2026-03-09 00:57:56.834580 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.834587 | orchestrator |
2026-03-09 00:57:56.834594 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-09 00:57:56.834600 | orchestrator | Monday 09 March 2026 00:52:29 +0000 (0:00:00.986) 0:02:02.402 **********
2026-03-09 00:57:56.834608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.834616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name':
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.834640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.834698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.834723 | orchestrator |
2026-03-09 00:57:56.834730 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-09 00:57:56.834737 | orchestrator | Monday 09 March 2026 00:52:33 +0000 (0:00:04.951) 0:02:07.353 **********
2026-03-09 00:57:56.834744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.834752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1',
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.834786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834793 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.834800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834863 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.834873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.834881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.834895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.834902 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.834909 | orchestrator |
2026-03-09 00:57:56.834916 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-09 00:57:56.834922 | orchestrator | Monday 09 March 2026 00:52:35 +0000 (0:00:01.030) 0:02:08.384 **********
2026-03-09 00:57:56.834929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.834941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.834952 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.834959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.834967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.834973 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.834980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.835051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.835068 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.835076 | orchestrator |
2026-03-09 00:57:56.835082 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-09 00:57:56.835089 | orchestrator | Monday 09 March 2026 00:52:36 +0000 (0:00:01.040) 0:02:09.424 **********
2026-03-09 00:57:56.835096 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:56.835103 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:56.835110 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:56.835117 | orchestrator |
2026-03-09 00:57:56.835123 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-09 00:57:56.835130 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:01.679) 0:02:11.103 **********
2026-03-09 00:57:56.835137 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:56.835144 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:56.835151 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:56.835158 | orchestrator |
2026-03-09 00:57:56.835165 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-09 00:57:56.835172 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:02.123) 0:02:13.226 **********
2026-03-09 00:57:56.835181 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.835192 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.835207 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.835225 | orchestrator |
2026-03-09 00:57:56.835236 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-09 00:57:56.835248 | orchestrator | Monday 09 March 2026 00:52:40 +0000 (0:00:00.364) 0:02:13.591 **********
2026-03-09 00:57:56.835260 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.835271 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.835282 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.835293 | orchestrator |
2026-03-09 00:57:56.835304 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-09 00:57:56.835316 | orchestrator | Monday 09 March 2026 00:52:40 +0000 (0:00:00.365) 0:02:13.957 **********
2026-03-09 00:57:56.835344 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.835356 | orchestrator |
2026-03-09 00:57:56.835377 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-09 00:57:56.835388 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:01.047) 0:02:15.004 **********
2026-03-09 00:57:56.835401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.835441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:57:56.835457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.835464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:57:56.835479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2026-03-09 00:57:56.835566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.835596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:57:56.835604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.835654 | orchestrator |
2026-03-09 00:57:56.835661 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-09 00:57:56.835668 | orchestrator | Monday 09 March 2026 00:52:46 +0000 (0:00:04.655) 0:02:19.659 **********
2026-03-09 00:57:56.835678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.835691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.835711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:57:56.835722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:57:56.835745 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835899 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.835910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.835923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835931 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.835938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:57:56.835945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.835973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.835980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.835993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.836000 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.836007 | orchestrator |
2026-03-09 00:57:56.836014 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-09 00:57:56.836021 | orchestrator | Monday 09 March 2026 00:52:47 +0000 (0:00:00.964) 0:02:20.623 **********
2026-03-09 00:57:56.836028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.836037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.836045 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.836052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.836059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.836066 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.836073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.836081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-09 00:57:56.836087 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.836094 | orchestrator |
2026-03-09 00:57:56.836105 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-09 00:57:56.836112 | orchestrator | Monday 09 March 2026 00:52:48 +0000 (0:00:01.287) 0:02:21.910 **********
2026-03-09 00:57:56.836123 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:56.836130 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:56.836136 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:56.836143 | orchestrator |
2026-03-09 00:57:56.836150 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-09 00:57:56.836157 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:01.322) 0:02:23.233 **********
2026-03-09 00:57:56.836164 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:56.836170 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:56.836177 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:56.836184 | orchestrator |
2026-03-09 00:57:56.836191 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-09 00:57:56.836198 | orchestrator | Monday 09 March 2026 00:52:51 +0000 (0:00:01.977) 0:02:25.211 **********
2026-03-09 00:57:56.836209 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.836216 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.836223 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.836230 | orchestrator |
2026-03-09 00:57:56.836236 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-09 00:57:56.836243 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:00.952) 0:02:26.483 **********
2026-03-09 00:57:56.836249 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.836256 | orchestrator |
2026-03-09 00:57:56.836263 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-09 00:57:56.836269 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.952) 0:02:26.483 **********
2026-03-09 00:57:56.836277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group':
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 00:57:56.836295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.836308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 00:57:56.836325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 00:57:56.836333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.836346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.836354 | orchestrator | 2026-03-09 00:57:56.836367 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-09 00:57:56.836379 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:03.877) 0:02:30.360 ********** 2026-03-09 00:57:56.836401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:57:56.836414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.836426 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.836449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:57:56.836469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.836482 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.836505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:57:56.836526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.836534 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.836541 | orchestrator | 2026-03-09 00:57:56.836548 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-09 00:57:56.836554 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:03.168) 0:02:33.529 ********** 2026-03-09 00:57:56.836562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:57:56.836578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:57:56.836593 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.836600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:57:56.836607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:57:56.836614 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.836621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:57:56.836628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:57:56.836635 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.836642 | orchestrator | 2026-03-09 00:57:56.836649 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-09 00:57:56.836656 | orchestrator | Monday 09 March 2026 00:53:04 +0000 (0:00:04.358) 0:02:37.887 ********** 2026-03-09 00:57:56.836662 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.836669 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.836676 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.836682 | orchestrator | 2026-03-09 00:57:56.836689 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-09 00:57:56.836696 | orchestrator | Monday 09 March 2026 00:53:06 +0000 (0:00:01.527) 0:02:39.415 ********** 2026-03-09 00:57:56.836702 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.836709 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.836715 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.836722 | orchestrator | 2026-03-09 00:57:56.836729 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-09 
00:57:56.836735 | orchestrator | Monday 09 March 2026 00:53:08 +0000 (0:00:02.240) 0:02:41.655 ********** 2026-03-09 00:57:56.836747 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.836754 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.836760 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.836767 | orchestrator | 2026-03-09 00:57:56.836773 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-09 00:57:56.836780 | orchestrator | Monday 09 March 2026 00:53:08 +0000 (0:00:00.334) 0:02:41.990 ********** 2026-03-09 00:57:56.836787 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.836793 | orchestrator | 2026-03-09 00:57:56.836800 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-09 00:57:56.836808 | orchestrator | Monday 09 March 2026 00:53:09 +0000 (0:00:01.160) 0:02:43.151 ********** 2026-03-09 00:57:56.836856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.836871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.836883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.836895 | orchestrator | 2026-03-09 00:57:56.836905 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-09 00:57:56.836915 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:03.292) 0:02:46.443 ********** 2026-03-09 00:57:56.836928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.836949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.836961 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.836972 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.838094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.838204 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.838224 | orchestrator | 2026-03-09 00:57:56.838232 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-09 00:57:56.838240 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:00.458) 0:02:46.902 ********** 2026-03-09 00:57:56.838247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.838257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.838265 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.838272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.838279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.838286 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.838292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.838299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.838306 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.838313 | orchestrator | 2026-03-09 00:57:56.838320 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-09 00:57:56.838380 | orchestrator | Monday 09 March 2026 00:53:14 +0000 (0:00:00.741) 0:02:47.643 ********** 2026-03-09 00:57:56.838387 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.838395 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.838435 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.838443 | orchestrator | 2026-03-09 00:57:56.838536 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-09 00:57:56.838548 | orchestrator | Monday 09 March 2026 00:53:16 +0000 (0:00:01.831) 0:02:49.474 ********** 2026-03-09 00:57:56.838560 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.838571 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.838582 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.838593 | orchestrator | 2026-03-09 00:57:56.838603 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-09 00:57:56.838614 | orchestrator | Monday 09 March 2026 00:53:19 +0000 (0:00:02.997) 0:02:52.472 ********** 2026-03-09 00:57:56.838625 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.838635 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.838647 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.838659 | orchestrator | 
2026-03-09 00:57:56.838670 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-09 00:57:56.838681 | orchestrator | Monday 09 March 2026 00:53:19 +0000 (0:00:00.507) 0:02:52.979 ********** 2026-03-09 00:57:56.838693 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.838705 | orchestrator | 2026-03-09 00:57:56.838716 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-09 00:57:56.838727 | orchestrator | Monday 09 March 2026 00:53:20 +0000 (0:00:01.134) 0:02:54.114 ********** 2026-03-09 00:57:56.838772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:57:56.838792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:57:56.838813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:57:56.838829 | orchestrator | 2026-03-09 00:57:56.838862 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-09 00:57:56.838876 | orchestrator | Monday 09 March 2026 00:53:25 +0000 (0:00:05.236) 0:02:59.350 ********** 2026-03-09 00:57:56.838896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:57:56.838911 | orchestrator | 
skipping: [testbed-node-0] 2026-03-09 00:57:56.838930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:57:56.838949 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.838971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:57:56.838979 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.838986 | orchestrator | 2026-03-09 00:57:56.838993 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-09 00:57:56.839000 | orchestrator | Monday 09 March 2026 00:53:26 +0000 (0:00:00.822) 0:03:00.173 ********** 2026-03-09 00:57:56.839008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-09 00:57:56.839021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:57:56.839029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-09 00:57:56.839038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:57:56.839045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:57:56.839053 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.839060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-09 00:57:56.839067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:57:56.839074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-09 00:57:56.839086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:57:56.839093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:57:56.839099 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.839111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-09 00:57:56.839128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:57:56.839135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-09 00:57:56.839147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:57:56.839154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:57:56.839195 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.839208 | orchestrator | 2026-03-09 00:57:56.839219 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-09 00:57:56.839307 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:01.227) 0:03:01.400 ********** 2026-03-09 00:57:56.839320 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.839332 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.839343 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.839355 | orchestrator | 2026-03-09 00:57:56.839366 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-09 00:57:56.839377 | orchestrator | Monday 09 March 2026 00:53:29 +0000 (0:00:01.736) 0:03:03.136 ********** 2026-03-09 00:57:56.839388 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.839400 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.839411 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.839423 | orchestrator | 2026-03-09 00:57:56.839433 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-09 00:57:56.839444 | orchestrator | Monday 09 March 2026 00:53:32 +0000 (0:00:02.505) 0:03:05.642 ********** 2026-03-09 00:57:56.839456 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.839468 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.839478 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.839490 | orchestrator | 2026-03-09 00:57:56.839498 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-09 
00:57:56.839505 | orchestrator | Monday 09 March 2026 00:53:32 +0000 (0:00:00.522) 0:03:06.165 ********** 2026-03-09 00:57:56.839511 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.839518 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.839524 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.839536 | orchestrator | 2026-03-09 00:57:56.839546 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-09 00:57:56.839558 | orchestrator | Monday 09 March 2026 00:53:33 +0000 (0:00:00.430) 0:03:06.595 ********** 2026-03-09 00:57:56.839570 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.839582 | orchestrator | 2026-03-09 00:57:56.839594 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-09 00:57:56.839604 | orchestrator | Monday 09 March 2026 00:53:34 +0000 (0:00:01.406) 0:03:08.002 ********** 2026-03-09 00:57:56.839618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 00:57:56.839662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:57:56.839672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:57:56.839680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 00:57:56.839688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:57:56.839695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:57:56.839716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 00:57:56.839728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:57:56.839736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:57:56.839743 | orchestrator | 2026-03-09 00:57:56.839750 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-09 00:57:56.839757 | orchestrator | Monday 09 March 2026 00:53:38 +0000 (0:00:03.937) 0:03:11.940 ********** 2026-03-09 00:57:56.839764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 00:57:56.839772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:57:56.839784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:57:56.839791 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.839808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 00:57:56.839816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:57:56.839823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:57:56.839831 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.839872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 00:57:56.839887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:57:56.839993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:57:56.840019 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.840032 | orchestrator | 2026-03-09 
00:57:56.840040 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-09 00:57:56.840057 | orchestrator | Monday 09 March 2026 00:53:39 +0000 (0:00:00.735) 0:03:12.675 ********** 2026-03-09 00:57:56.840065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-09 00:57:56.840073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-09 00:57:56.840081 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.840088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-09 00:57:56.840096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-09 00:57:56.840103 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.840110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-09 00:57:56.840117 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-09 00:57:56.840124 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.840131 | orchestrator | 2026-03-09 00:57:56.840138 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-09 00:57:56.840152 | orchestrator | Monday 09 March 2026 00:53:40 +0000 (0:00:01.537) 0:03:14.213 ********** 2026-03-09 00:57:56.840159 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.840166 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.840173 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.840179 | orchestrator | 2026-03-09 00:57:56.840186 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-09 00:57:56.840193 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:01.461) 0:03:15.674 ********** 2026-03-09 00:57:56.840199 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.840276 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.840284 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.840290 | orchestrator | 2026-03-09 00:57:56.840297 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-09 00:57:56.840307 | orchestrator | Monday 09 March 2026 00:53:44 +0000 (0:00:02.475) 0:03:18.149 ********** 2026-03-09 00:57:56.840319 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.840330 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.840341 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.840352 | orchestrator | 2026-03-09 00:57:56.840364 | orchestrator | TASK [include_role : magnum] 
*************************************************** 2026-03-09 00:57:56.840376 | orchestrator | Monday 09 March 2026 00:53:45 +0000 (0:00:00.604) 0:03:18.754 ********** 2026-03-09 00:57:56.840388 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.840399 | orchestrator | 2026-03-09 00:57:56.840412 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-09 00:57:56.840425 | orchestrator | Monday 09 March 2026 00:53:47 +0000 (0:00:02.455) 0:03:21.209 ********** 2026-03-09 00:57:56.840458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.840475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.840489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.840512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.840525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.840552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.840567 | orchestrator | 2026-03-09 00:57:56.840580 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-09 00:57:56.840594 | orchestrator | Monday 09 March 2026 00:53:51 +0000 (0:00:04.170) 0:03:25.380 ********** 2026-03-09 00:57:56.840607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.840630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.840643 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.840656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.840681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.840693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.840705 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.840724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.840735 | 
orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.840751 | orchestrator | 2026-03-09 00:57:56.840763 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-09 00:57:56.840777 | orchestrator | Monday 09 March 2026 00:53:52 +0000 (0:00:00.808) 0:03:26.188 ********** 2026-03-09 00:57:56.840791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.840804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.840818 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.840831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.840965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.840986 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.841000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841024 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.841035 | orchestrator | 2026-03-09 00:57:56.841046 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-09 00:57:56.841058 | orchestrator | Monday 09 March 2026 00:53:53 +0000 (0:00:01.005) 0:03:27.193 ********** 2026-03-09 00:57:56.841078 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.841090 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.841111 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.841197 | orchestrator | 2026-03-09 00:57:56.841264 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-09 00:57:56.841276 | orchestrator | Monday 09 March 2026 00:53:55 +0000 (0:00:01.791) 0:03:28.985 ********** 2026-03-09 00:57:56.841284 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.841291 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.841298 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.841304 | orchestrator | 2026-03-09 00:57:56.841311 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-09 00:57:56.841318 | orchestrator | Monday 09 March 2026 00:53:58 +0000 (0:00:02.600) 0:03:31.586 ********** 2026-03-09 00:57:56.841325 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.841342 | orchestrator | 2026-03-09 00:57:56.841348 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-09 00:57:56.841355 | orchestrator | Monday 09 March 2026 00:53:59 +0000 (0:00:01.221) 0:03:32.808 ********** 2026-03-09 00:57:56.841364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.841372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.841422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.841451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841485 | orchestrator | 2026-03-09 00:57:56.841491 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-09 00:57:56.841501 | orchestrator | Monday 09 March 2026 00:54:04 +0000 (0:00:05.316) 0:03:38.124 ********** 2026-03-09 00:57:56.841510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.841517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841545 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.841559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.841567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.841594 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.841612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.841634 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.841640 | orchestrator | 2026-03-09 00:57:56.841647 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-09 00:57:56.841653 | orchestrator | Monday 09 March 2026 00:54:05 +0000 (0:00:00.799) 0:03:38.924 ********** 2026-03-09 00:57:56.841659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841673 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.841679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841699 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.841705 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.841712 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.841728 | orchestrator | 2026-03-09 00:57:56.841735 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-09 00:57:56.841742 | orchestrator | Monday 09 March 2026 00:54:06 +0000 (0:00:01.125) 0:03:40.050 ********** 2026-03-09 00:57:56.841748 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.841755 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.841761 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.841767 | orchestrator | 2026-03-09 00:57:56.841775 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-09 00:57:56.841786 | orchestrator | Monday 09 March 2026 00:54:08 +0000 (0:00:01.390) 0:03:41.441 ********** 2026-03-09 00:57:56.841795 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.841805 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.841815 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.841824 | orchestrator | 2026-03-09 00:57:56.841857 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-09 00:57:56.841869 | orchestrator | Monday 09 March 2026 00:54:10 +0000 (0:00:02.495) 0:03:43.936 ********** 2026-03-09 00:57:56.841886 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.841897 | orchestrator | 2026-03-09 00:57:56.841909 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-09 00:57:56.841924 | orchestrator | Monday 09 March 2026 00:54:11 +0000 (0:00:01.430) 
0:03:45.366 ********** 2026-03-09 00:57:56.841934 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 00:57:56.841940 | orchestrator | 2026-03-09 00:57:56.841947 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-09 00:57:56.841953 | orchestrator | Monday 09 March 2026 00:54:15 +0000 (0:00:03.381) 0:03:48.748 ********** 2026-03-09 00:57:56.841961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:57:56.841969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:57:56.841982 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.841997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:57:56.842005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:57:56.842165 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.842182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:57:56.842197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:57:56.842204 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.842210 | orchestrator | 2026-03-09 00:57:56.842216 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-09 00:57:56.842223 | orchestrator | Monday 09 March 2026 00:54:18 +0000 (0:00:03.002) 0:03:51.751 ********** 2026-03-09 00:57:56.842242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:57:56.842250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:57:56.842257 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.842268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:57:56.842283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:57:56.842291 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.842297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:57:56.842309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:57:56.842315 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.842322 | orchestrator | 2026-03-09 00:57:56.842328 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-09 00:57:56.842334 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:03.696) 0:03:55.448 ********** 2026-03-09 00:57:56.842341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:57:56.842356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:57:56.842363 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.842369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:57:56.842376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:57:56.842383 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.842390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:57:56.842401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:57:56.842408 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.842418 | orchestrator | 2026-03-09 00:57:56.842428 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-09 00:57:56.842439 | orchestrator | Monday 09 March 2026 00:54:25 +0000 (0:00:03.760) 0:03:59.208 ********** 2026-03-09 00:57:56.842450 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.842461 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.842473 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.842483 | orchestrator | 2026-03-09 00:57:56.842493 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-09 00:57:56.842504 | orchestrator | Monday 09 March 2026 00:54:28 +0000 (0:00:02.411) 0:04:01.620 ********** 2026-03-09 00:57:56.842512 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.842518 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.842524 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.842531 | orchestrator | 2026-03-09 00:57:56.842537 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-09 00:57:56.842543 | orchestrator | Monday 09 March 2026 00:54:30 +0000 (0:00:02.061) 0:04:03.682 ********** 2026-03-09 00:57:56.842549 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.842556 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.842562 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.842568 | orchestrator | 2026-03-09 00:57:56.842574 | orchestrator | TASK 
[include_role : memcached] ************************************************
2026-03-09 00:57:56.842581 | orchestrator | Monday 09 March 2026 00:54:30 +0000 (0:00:00.399) 0:04:04.081 **********
2026-03-09 00:57:56.842587 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.842594 | orchestrator |
2026-03-09 00:57:56.842600 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-09 00:57:56.842606 | orchestrator | Monday 09 March 2026 00:54:32 +0000 (0:00:01.492) 0:04:05.574 **********
2026-03-09 00:57:56.842625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-09 00:57:56.842633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-09 00:57:56.842645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-09 00:57:56.842653 | orchestrator |
2026-03-09 00:57:56.842659 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-09 00:57:56.842665 | orchestrator | Monday 09 March 2026 00:54:33 +0000 (0:00:01.698) 0:04:07.272 **********
2026-03-09 00:57:56.842672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-09 00:57:56.842678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-09 00:57:56.842685 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.842692 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.843445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-09 00:57:56.843513 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.843561 | orchestrator |
2026-03-09 00:57:56.843572 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-09 00:57:56.843648 | orchestrator | Monday 09 March 2026 00:54:34 +0000 (0:00:00.410) 0:04:07.682 **********
2026-03-09 00:57:56.843660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-09 00:57:56.843669 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.843677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-09 00:57:56.843685 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.843697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-09 00:57:56.843712 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.843726 | orchestrator |
2026-03-09 00:57:56.843740 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-09 00:57:56.843755 | orchestrator | Monday 09 March 2026 00:54:35 +0000 (0:00:00.975) 0:04:08.658 **********
2026-03-09 00:57:56.843769 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.843782 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.843795 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.843804 | orchestrator |
2026-03-09 00:57:56.843812 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-09 00:57:56.843819 | orchestrator | Monday 09 March 2026 00:54:35 +0000 (0:00:00.484) 0:04:09.142 **********
2026-03-09 00:57:56.843827 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.843855 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.843863 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.843871 | orchestrator |
2026-03-09 00:57:56.843879 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-09 00:57:56.843886 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:01.447) 0:04:10.589 **********
2026-03-09 00:57:56.843894 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.843902 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.843910 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.843918 | orchestrator |
2026-03-09 00:57:56.843926 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-09 00:57:56.843933 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:00.307) 0:04:10.897 **********
2026-03-09 00:57:56.843941 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.843949 | orchestrator |
2026-03-09 00:57:56.843957 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-09 00:57:56.843965 | orchestrator | Monday 09 March 2026 00:54:39 +0000 (0:00:01.490) 0:04:12.387 **********
2026-03-09 00:57:56.843975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.844015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-03-09 00:57:56.844038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-03-09 00:57:56.844049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-09 00:57:56.844123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.844134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.844144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:57:56.844179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-03-09 00:57:56.844198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-03-09 00:57:56.844232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-03-09 00:57:56.844242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-03-09 00:57:56.844252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-03-09 00:57:56.844272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-09 00:57:56.844457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-09 00:57:56.844480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:57:56.844506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:57:56.844525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-09 00:57:56.844533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:57:56.844542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-03-09 00:57:56.844583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-03-09 00:57:56.844592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.844609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-03-09 00:57:56.844623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u
openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.844640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.844649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.844667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.844683 | orchestrator | 2026-03-09 00:57:56.844691 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-09 00:57:56.844700 | orchestrator | Monday 09 March 2026 00:54:44 +0000 (0:00:05.056) 0:04:17.443 ********** 2026-03-09 00:57:56.844708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.844725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-09 00:57:56.844743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.844758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-09 00:57:56.844774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.844801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-09 00:57:56.844815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.844824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 
'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-09 00:57:56.844867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.844875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': 
{'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-09 00:57:56.844889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-09 00:57:56.844912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-09 00:57:56.844921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.844940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.844953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.844962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.844970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.845006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-09 00:57:56.845022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-09 00:57:56.845030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.845039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.845052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.845061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.845069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.845088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.845097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-09 00:57:56.845168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-09 00:57:56.845183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.845192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.845200 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.845218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.845227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.845235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.845249 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.845258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-09 00:57:56.845267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.845287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-09 00:57:56.846173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.846224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:57:56.846231 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.846238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:57:56.846254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  
2026-03-09 00:57:56.846260 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.846265 | orchestrator | 2026-03-09 00:57:56.846271 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-09 00:57:56.846277 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:02.005) 0:04:19.449 ********** 2026-03-09 00:57:56.846283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.846293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.846302 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.846311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.846320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.846329 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.846338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.846362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.846373 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.846383 | orchestrator | 2026-03-09 00:57:56.846391 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-09 00:57:56.846396 | orchestrator | Monday 09 March 2026 00:54:48 +0000 (0:00:02.433) 0:04:21.883 ********** 2026-03-09 00:57:56.846402 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.846408 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.846418 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.846424 | orchestrator | 2026-03-09 00:57:56.846429 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-09 00:57:56.846439 | orchestrator | Monday 09 March 2026 00:54:49 +0000 (0:00:01.367) 0:04:23.250 ********** 2026-03-09 00:57:56.846445 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.846450 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.846456 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.846461 | orchestrator | 2026-03-09 00:57:56.846466 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-09 00:57:56.846472 | orchestrator | Monday 09 March 2026 00:54:52 +0000 (0:00:02.417) 0:04:25.668 ********** 2026-03-09 00:57:56.846477 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.846483 | orchestrator | 2026-03-09 00:57:56.846488 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-09 00:57:56.846493 | orchestrator | Monday 09 March 2026 00:54:53 +0000 (0:00:01.543) 0:04:27.211 ********** 2026-03-09 00:57:56.846499 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-09 00:57:56.846540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-09 00:57:56.846557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-09 00:57:56.846569 | orchestrator | 2026-03-09 00:57:56.846575 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-09 00:57:56.846581 | orchestrator | Monday 09 March 2026 00:54:57 +0000 (0:00:03.920) 0:04:31.132 ********** 2026-03-09 00:57:56.846587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 00:57:56.846609 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.846616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 00:57:56.846622 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.846628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 00:57:56.846666 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.846673 | orchestrator | 2026-03-09 00:57:56.846679 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-09 00:57:56.846684 | orchestrator | Monday 09 March 2026 00:54:58 +0000 (0:00:00.600) 0:04:31.732 ********** 2026-03-09 00:57:56.846694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.846718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.846725 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.846731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.846737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.846742 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.846752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.846762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.846773 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.846782 | orchestrator | 2026-03-09 00:57:56.846792 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-09 00:57:56.846802 | orchestrator | Monday 09 March 2026 00:54:59 +0000 (0:00:01.318) 0:04:33.051 ********** 2026-03-09 00:57:56.846812 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.846822 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.846870 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.846881 | orchestrator | 2026-03-09 00:57:56.846886 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-09 00:57:56.846892 | orchestrator | Monday 09 March 2026 00:55:01 +0000 (0:00:01.470) 0:04:34.521 ********** 
2026-03-09 00:57:56.846897 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.846903 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.846908 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.846914 | orchestrator | 2026-03-09 00:57:56.846919 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-09 00:57:56.846924 | orchestrator | Monday 09 March 2026 00:55:03 +0000 (0:00:02.852) 0:04:37.373 ********** 2026-03-09 00:57:56.846930 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.846935 | orchestrator | 2026-03-09 00:57:56.846940 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-09 00:57:56.846946 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:01.875) 0:04:39.249 ********** 2026-03-09 00:57:56.846952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-03-09 00:57:56.846972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.846979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.846986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.846992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.847023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.847029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847044 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847059 | orchestrator | 2026-03-09 00:57:56.847064 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-09 00:57:56.847076 | orchestrator | Monday 09 March 2026 00:55:13 +0000 (0:00:07.399) 0:04:46.649 ********** 2026-03-09 00:57:56.847082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.847088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.847094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847112 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.847131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.847137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847153 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.847171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.847178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:57:56.847197 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847212 | orchestrator | 2026-03-09 00:57:56.847221 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-09 00:57:56.847231 | orchestrator | Monday 09 March 2026 00:55:14 +0000 (0:00:00.873) 0:04:47.522 ********** 2026-03-09 00:57:56.847241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847279 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 
00:57:56.847295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847320 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.847348 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847353 | orchestrator | 2026-03-09 00:57:56.847359 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-09 00:57:56.847368 | orchestrator | Monday 09 March 2026 00:55:16 +0000 (0:00:02.050) 0:04:49.574 ********** 2026-03-09 00:57:56.847374 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.847379 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.847385 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.847390 | orchestrator | 2026-03-09 00:57:56.847395 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-09 00:57:56.847401 | orchestrator | Monday 09 March 2026 00:55:18 +0000 (0:00:02.771) 0:04:52.345 ********** 2026-03-09 00:57:56.847406 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.847411 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.847417 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.847423 | orchestrator | 2026-03-09 00:57:56.847428 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-09 00:57:56.847433 | orchestrator | Monday 09 March 2026 00:55:21 +0000 (0:00:02.486) 0:04:54.831 ********** 2026-03-09 00:57:56.847439 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.847444 | orchestrator | 2026-03-09 00:57:56.847450 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-09 00:57:56.847455 | orchestrator | Monday 09 March 2026 00:55:23 +0000 (0:00:02.089) 0:04:56.921 ********** 2026-03-09 00:57:56.847461 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-09 00:57:56.847467 | orchestrator | 2026-03-09 00:57:56.847473 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-09 00:57:56.847478 | orchestrator | Monday 09 March 2026 00:55:25 +0000 (0:00:01.840) 0:04:58.761 ********** 2026-03-09 00:57:56.847484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-09 00:57:56.847490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-09 00:57:56.847499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-09 00:57:56.847505 | orchestrator | 2026-03-09 00:57:56.847513 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-09 00:57:56.847520 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:04.224) 0:05:02.985 ********** 2026-03-09 00:57:56.847526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847535 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847547 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847558 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847564 | orchestrator | 2026-03-09 00:57:56.847569 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-09 00:57:56.847575 | orchestrator | Monday 09 March 2026 00:55:31 +0000 (0:00:01.618) 0:05:04.604 ********** 2026-03-09 00:57:56.847580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:57:56.847586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:57:56.847592 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:57:56.847603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:57:56.847609 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-03-09 00:57:56.847620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:57:56.847625 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847631 | orchestrator | 2026-03-09 00:57:56.847640 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-09 00:57:56.847649 | orchestrator | Monday 09 March 2026 00:55:33 +0000 (0:00:02.434) 0:05:07.038 ********** 2026-03-09 00:57:56.847658 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.847667 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.847676 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.847685 | orchestrator | 2026-03-09 00:57:56.847695 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-09 00:57:56.847715 | orchestrator | Monday 09 March 2026 00:55:36 +0000 (0:00:02.435) 0:05:09.474 ********** 2026-03-09 00:57:56.847724 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.847733 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.847742 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.847749 | orchestrator | 2026-03-09 00:57:56.847758 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-09 00:57:56.847764 | orchestrator | Monday 09 March 2026 00:55:39 +0000 (0:00:03.350) 0:05:12.824 ********** 2026-03-09 00:57:56.847770 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-09 00:57:56.847776 | orchestrator | 2026-03-09 00:57:56.847781 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-09 00:57:56.847787 | orchestrator | Monday 09 March 2026 00:55:40 +0000 (0:00:00.908) 0:05:13.733 ********** 2026-03-09 00:57:56.847792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847798 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847810 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2026-03-09 00:57:56.847821 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847826 | orchestrator | 2026-03-09 00:57:56.847845 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-09 00:57:56.847855 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:01.482) 0:05:15.216 ********** 2026-03-09 00:57:56.847861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847867 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847882 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:57:56.847900 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847906 | orchestrator | 2026-03-09 00:57:56.847911 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-09 00:57:56.847917 | orchestrator | Monday 09 March 2026 00:55:43 +0000 (0:00:01.861) 0:05:17.077 ********** 2026-03-09 00:57:56.847922 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.847928 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.847933 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.847938 | orchestrator | 2026-03-09 00:57:56.847944 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-09 00:57:56.847949 | orchestrator | Monday 09 March 2026 00:55:45 +0000 (0:00:01.317) 0:05:18.395 ********** 2026-03-09 00:57:56.847955 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.847960 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.847966 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.847971 | orchestrator | 2026-03-09 00:57:56.847977 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-09 00:57:56.847982 | orchestrator | Monday 09 March 2026 00:55:47 +0000 (0:00:02.663) 0:05:21.058 ********** 2026-03-09 00:57:56.847988 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.847993 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.847998 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.848004 | orchestrator | 2026-03-09 00:57:56.848009 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-09 00:57:56.848014 | orchestrator | Monday 09 
March 2026 00:55:51 +0000 (0:00:03.459) 0:05:24.517 ********** 2026-03-09 00:57:56.848020 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-09 00:57:56.848025 | orchestrator | 2026-03-09 00:57:56.848031 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-09 00:57:56.848039 | orchestrator | Monday 09 March 2026 00:55:51 +0000 (0:00:00.855) 0:05:25.373 ********** 2026-03-09 00:57:56.848048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:57:56.848058 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.848068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:57:56.848084 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.848090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:57:56.848095 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.848101 | orchestrator |
2026-03-09 00:57:56.848106 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-09 00:57:56.848112 | orchestrator | Monday 09 March 2026 00:55:53 +0000 (0:00:01.523) 0:05:26.896 **********
2026-03-09 00:57:56.848117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:57:56.848123 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.848138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:57:56.848144 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.848150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:57:56.848155 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.848161 | orchestrator |
2026-03-09 00:57:56.848166 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-09 00:57:56.848172 | orchestrator | Monday 09 March 2026 00:55:54 +0000 (0:00:01.388) 0:05:28.285 **********
2026-03-09 00:57:56.848177 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.848182 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.848188 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.848193 | orchestrator |
2026-03-09 00:57:56.848198 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-09 00:57:56.848204 | orchestrator | Monday 09 March 2026 00:55:56 +0000 (0:00:01.582) 0:05:29.867 **********
2026-03-09 00:57:56.848209 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:56.848215 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:56.848220 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:56.848225 | orchestrator |
2026-03-09 00:57:56.848231 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-09 00:57:56.848236 | orchestrator | Monday 09 March 2026 00:55:59 +0000 (0:00:02.861) 0:05:32.729 **********
2026-03-09 00:57:56.848247 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:56.848253 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:56.848258 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:56.848263 | orchestrator |
2026-03-09 00:57:56.848269 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-09 00:57:56.848274 | orchestrator | Monday 09 March 2026 00:56:02 +0000 (0:00:03.324) 0:05:36.054 **********
2026-03-09 00:57:56.848279 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.848285 | orchestrator |
2026-03-09 00:57:56.848290 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-09 00:57:56.848296 | orchestrator | Monday 09 March 2026 00:56:04 +0000 (0:00:01.437) 0:05:37.491 **********
2026-03-09 00:57:56.848302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:57:56.848308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:57:56.848321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.848327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.848334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:57:56.848346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.848351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:57:56.848357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.848366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.848374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.848380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:57:56.848390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:57:56.848395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.848401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.848407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.848413 | orchestrator |
2026-03-09 00:57:56.849023 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-09 00:57:56.849052 | orchestrator | Monday 09 March 2026 00:56:08 +0000 (0:00:03.900) 0:05:41.392 **********
2026-03-09 00:57:56.849064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:57:56.849078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:57:56.849085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.849091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:57:56.849097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.849112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:57:56.849118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.849127 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.849133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.849139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.849144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.849150 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.849156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:57:56.849165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:57:56.849173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.849182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:57:56.849188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:57:56.849193 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.849199 | orchestrator |
2026-03-09 00:57:56.849205 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-09 00:57:56.849210 | orchestrator | Monday 09 March 2026 00:56:09 +0000 (0:00:01.151) 0:05:42.543 **********
2026-03-09 00:57:56.849216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:57:56.849222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:57:56.849228 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.849234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:57:56.849239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:57:56.849245 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.849251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:57:56.849256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:57:56.849262 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:56.849267 | orchestrator |
2026-03-09 00:57:56.849276 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-03-09 00:57:56.849285 | orchestrator | Monday 09 March 2026 00:56:10 +0000 (0:00:01.056) 0:05:43.600 **********
2026-03-09 00:57:56.849294 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:56.849304 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:56.849313 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:56.849326 | orchestrator |
2026-03-09 00:57:56.849332 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-03-09 00:57:56.849337 | orchestrator | Monday 09 March 2026 00:56:11 +0000 (0:00:01.297) 0:05:44.898 **********
2026-03-09 00:57:56.849346 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:56.849352 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:56.849358 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:56.849363 | orchestrator |
2026-03-09 00:57:56.849371 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-03-09 00:57:56.849377 | orchestrator | Monday 09 March 2026 00:56:13 +0000 (0:00:02.320) 0:05:47.218 **********
2026-03-09 00:57:56.849382 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:56.849389 | orchestrator |
2026-03-09 00:57:56.849397 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-03-09 00:57:56.849405 | orchestrator | Monday 09 March 2026 00:56:15 +0000 (0:00:01.860) 0:05:49.079 **********
2026-03-09 00:57:56.849413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.849423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.849433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.849447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-09 00:57:56.849457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-09 00:57:56.849462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-09 00:57:56.849468 | orchestrator |
2026-03-09 00:57:56.849473 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-03-09 00:57:56.849478 | orchestrator | Monday 09 March 2026 00:56:21 +0000 (0:00:05.815) 0:05:54.894 **********
2026-03-09 00:57:56.849483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.849496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-09 00:57:56.849502 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:56.849507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 00:57:56.849512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-09 00:57:56.849518 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:56.849523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy':
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.849536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 00:57:56.849541 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.849546 | orchestrator | 2026-03-09 00:57:56.849551 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-09 00:57:56.849556 | orchestrator | Monday 09 March 2026 00:56:22 +0000 (0:00:00.759) 0:05:55.654 ********** 2026-03-09 00:57:56.849561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': 
['option httpchk']}})  2026-03-09 00:57:56.849567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-09 00:57:56.849573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-09 00:57:56.849579 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.849584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.849589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-09 00:57:56.849594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.849599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-09 00:57:56.849610 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-09 00:57:56.849615 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.849620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-09 00:57:56.849625 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.849630 | orchestrator | 2026-03-09 00:57:56.849634 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-09 00:57:56.849641 | orchestrator | Monday 09 March 2026 00:56:23 +0000 (0:00:01.465) 0:05:57.120 ********** 2026-03-09 00:57:56.849646 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.849651 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.849657 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.849663 | orchestrator | 2026-03-09 00:57:56.849669 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-09 00:57:56.849675 | orchestrator | Monday 09 March 2026 00:56:24 +0000 (0:00:00.551) 0:05:57.671 ********** 2026-03-09 00:57:56.849687 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.849696 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.849704 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.849713 | orchestrator | 2026-03-09 00:57:56.849725 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2026-03-09 00:57:56.849735 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:01.561) 0:05:59.233 ********** 2026-03-09 00:57:56.849743 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.849752 | orchestrator | 2026-03-09 00:57:56.849760 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-09 00:57:56.849768 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:01.827) 0:06:01.061 ********** 2026-03-09 00:57:56.849777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 00:57:56.849787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:57:56.849798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.849823 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 00:57:56.849830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 00:57:56.849853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:57:56.849862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:57:56.849868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.849899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.849915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.849921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-09 00:57:56.849932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.849950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.849960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-09 00:57:56.849966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.849984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2026-03-09 00:57:56.849990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 00:57:56.850001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}}}})  2026-03-09 00:57:56.850007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850043 | orchestrator | 2026-03-09 00:57:56.850051 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-09 00:57:56.850057 | orchestrator | Monday 09 March 2026 00:56:32 +0000 (0:00:04.332) 0:06:05.394 ********** 2026-03-09 00:57:56.850063 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-09 00:57:56.850087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:57:56.850093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.850149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-09 00:57:56.850159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-09 00:57:56.850164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:57:56.850174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850225 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.850248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-09 00:57:56.850267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-09 00:57:56.850282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:57:56.850292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850297 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 00:57:56.850374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-09 00:57:56.850379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:57:56.850389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:57:56.850394 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850399 | orchestrator | 2026-03-09 00:57:56.850404 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-09 00:57:56.850409 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:00.995) 0:06:06.389 ********** 2026-03-09 00:57:56.850421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-09 00:57:56.850430 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-09 00:57:56.850436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.850441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.850446 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-09 00:57:56.850457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-09 00:57:56.850462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.850467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.850472 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-09 00:57:56.850482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-09 00:57:56.850487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.850501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-09 00:57:56.850507 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850511 | orchestrator | 2026-03-09 00:57:56.850516 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-09 00:57:56.850521 | orchestrator | Monday 09 March 2026 00:56:34 +0000 (0:00:01.363) 0:06:07.753 ********** 2026-03-09 00:57:56.850526 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850531 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850536 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850540 | orchestrator | 2026-03-09 00:57:56.850545 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-09 00:57:56.850550 | orchestrator | Monday 09 March 2026 00:56:34 +0000 (0:00:00.494) 0:06:08.247 ********** 2026-03-09 00:57:56.850555 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850559 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850564 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850569 | orchestrator | 2026-03-09 00:57:56.850574 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-09 00:57:56.850579 | orchestrator | Monday 09 March 2026 00:56:36 +0000 (0:00:01.412) 0:06:09.659 ********** 2026-03-09 00:57:56.850583 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.850588 | orchestrator | 
2026-03-09 00:57:56.850593 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-09 00:57:56.850598 | orchestrator | Monday 09 March 2026 00:56:37 +0000 (0:00:01.435) 0:06:11.095 ********** 2026-03-09 00:57:56.850603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:57:56.850609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:57:56.850623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:57:56.850633 | orchestrator | 2026-03-09 00:57:56.850644 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-09 00:57:56.850653 | orchestrator | Monday 09 March 2026 00:56:40 +0000 (0:00:02.757) 0:06:13.852 ********** 2026-03-09 00:57:56.850661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:57:56.850670 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:57:56.850690 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:57:56.850714 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850722 | orchestrator | 2026-03-09 00:57:56.850731 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-09 00:57:56.850737 | orchestrator | Monday 09 March 2026 00:56:40 +0000 (0:00:00.453) 0:06:14.306 ********** 2026-03-09 00:57:56.850771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:57:56.850776 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:57:56.850786 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:57:56.850796 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850801 | orchestrator | 2026-03-09 00:57:56.850810 | orchestrator | TASK 
[proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-09 00:57:56.850815 | orchestrator | Monday 09 March 2026 00:56:41 +0000 (0:00:00.681) 0:06:14.987 ********** 2026-03-09 00:57:56.850823 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850828 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850849 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850855 | orchestrator | 2026-03-09 00:57:56.850860 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-09 00:57:56.850865 | orchestrator | Monday 09 March 2026 00:56:42 +0000 (0:00:01.242) 0:06:16.230 ********** 2026-03-09 00:57:56.850869 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.850874 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.850879 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.850884 | orchestrator | 2026-03-09 00:57:56.850888 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-09 00:57:56.850893 | orchestrator | Monday 09 March 2026 00:56:44 +0000 (0:00:01.521) 0:06:17.751 ********** 2026-03-09 00:57:56.850898 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.850903 | orchestrator | 2026-03-09 00:57:56.850907 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-09 00:57:56.850912 | orchestrator | Monday 09 March 2026 00:56:45 +0000 (0:00:01.540) 0:06:19.292 ********** 2026-03-09 00:57:56.850917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-09 00:57:56.850927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-09 00:57:56.850933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-09 00:57:56.850948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-09 00:57:56.850954 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-09 00:57:56.850963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}}}}) 2026-03-09 00:57:56.850968 | orchestrator | 2026-03-09 00:57:56.850973 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-09 00:57:56.850978 | orchestrator | Monday 09 March 2026 00:56:53 +0000 (0:00:07.181) 0:06:26.474 ********** 2026-03-09 00:57:56.850986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-09 00:57:56.850995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 00:57:56.851000 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-09 00:57:56.851015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 00:57:56.851020 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-09 00:57:56.851037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 00:57:56.851042 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851047 | orchestrator | 2026-03-09 00:57:56.851054 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-09 00:57:56.851061 | orchestrator | Monday 09 March 2026 00:56:54 +0000 (0:00:01.145) 0:06:27.619 ********** 2026-03-09 00:57:56.851074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-09 00:57:56.851083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-09 00:57:56.851091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.851100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.851108 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-09 00:57:56.851125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-09 00:57:56.851134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.851142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-09 00:57:56.851151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}})  2026-03-09 00:57:56.851160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-09 00:57:56.851165 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.851178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-09 00:57:56.851183 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851188 | orchestrator | 2026-03-09 00:57:56.851193 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-09 00:57:56.851198 | orchestrator | Monday 09 March 2026 00:56:55 +0000 (0:00:01.002) 0:06:28.621 ********** 2026-03-09 00:57:56.851207 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.851211 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.851216 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.851221 | orchestrator | 2026-03-09 00:57:56.851226 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-09 00:57:56.851231 | orchestrator | Monday 09 March 2026 00:56:56 +0000 (0:00:01.296) 0:06:29.918 ********** 2026-03-09 00:57:56.851235 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.851240 
| orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.851245 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.851250 | orchestrator | 2026-03-09 00:57:56.851254 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-09 00:57:56.851259 | orchestrator | Monday 09 March 2026 00:56:58 +0000 (0:00:02.321) 0:06:32.239 ********** 2026-03-09 00:57:56.851264 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851269 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851273 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851278 | orchestrator | 2026-03-09 00:57:56.851283 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-09 00:57:56.851288 | orchestrator | Monday 09 March 2026 00:56:59 +0000 (0:00:00.347) 0:06:32.587 ********** 2026-03-09 00:57:56.851292 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851297 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851302 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851307 | orchestrator | 2026-03-09 00:57:56.851312 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-09 00:57:56.851316 | orchestrator | Monday 09 March 2026 00:56:59 +0000 (0:00:00.702) 0:06:33.289 ********** 2026-03-09 00:57:56.851321 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851326 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851331 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851335 | orchestrator | 2026-03-09 00:57:56.851340 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-09 00:57:56.851345 | orchestrator | Monday 09 March 2026 00:57:00 +0000 (0:00:00.353) 0:06:33.643 ********** 2026-03-09 00:57:56.851350 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851354 | 
orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851359 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851364 | orchestrator | 2026-03-09 00:57:56.851369 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-09 00:57:56.851373 | orchestrator | Monday 09 March 2026 00:57:00 +0000 (0:00:00.359) 0:06:34.003 ********** 2026-03-09 00:57:56.851379 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851383 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.851388 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851393 | orchestrator | 2026-03-09 00:57:56.851398 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-09 00:57:56.851402 | orchestrator | Monday 09 March 2026 00:57:00 +0000 (0:00:00.364) 0:06:34.367 ********** 2026-03-09 00:57:56.851407 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:56.851412 | orchestrator | 2026-03-09 00:57:56.851417 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-09 00:57:56.851422 | orchestrator | Monday 09 March 2026 00:57:03 +0000 (0:00:02.071) 0:06:36.439 ********** 2026-03-09 00:57:56.851427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.851442 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.851448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:57:56.851453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.851458 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.851463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:57:56.851468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.851474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.851487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:57:56.851492 | orchestrator | 2026-03-09 00:57:56.851498 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-09 00:57:56.851503 | orchestrator | Monday 09 March 2026 00:57:05 +0000 (0:00:02.610) 0:06:39.049 ********** 2026-03-09 00:57:56.851511 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 00:57:56.851519 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:57:56.851528 | orchestrator | } 2026-03-09 00:57:56.851536 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 00:57:56.851545 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:57:56.851553 | orchestrator | } 2026-03-09 00:57:56.851560 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 00:57:56.851567 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 00:57:56.851572 | orchestrator | } 2026-03-09 00:57:56.851577 | orchestrator | 2026-03-09 00:57:56.851582 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 
00:57:56.851587 | orchestrator | Monday 09 March 2026 00:57:06 +0000 (0:00:00.788) 0:06:39.838 ********** 2026-03-09 00:57:56.851592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.851597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.851602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 
00:57:56.851607 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.851612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.851621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.851633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.851638 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 00:57:56.851643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:57:56.851648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:57:56.851653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:57:56.851658 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.851663 | 
orchestrator | 2026-03-09 00:57:56.851668 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-09 00:57:56.851673 | orchestrator | Monday 09 March 2026 00:57:08 +0000 (0:00:01.700) 0:06:41.539 ********** 2026-03-09 00:57:56.851678 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.851683 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.851691 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.851696 | orchestrator | 2026-03-09 00:57:56.851700 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-09 00:57:56.851705 | orchestrator | Monday 09 March 2026 00:57:08 +0000 (0:00:00.828) 0:06:42.368 ********** 2026-03-09 00:57:56.851710 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.851715 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.851719 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.851726 | orchestrator | 2026-03-09 00:57:56.851734 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-09 00:57:56.851742 | orchestrator | Monday 09 March 2026 00:57:09 +0000 (0:00:00.469) 0:06:42.837 ********** 2026-03-09 00:57:56.851749 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.851755 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.851763 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.851771 | orchestrator | 2026-03-09 00:57:56.851779 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-09 00:57:56.851787 | orchestrator | Monday 09 March 2026 00:57:10 +0000 (0:00:01.076) 0:06:43.913 ********** 2026-03-09 00:57:56.851794 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.851803 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.851809 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.851813 | orchestrator | 2026-03-09 00:57:56.851818 | orchestrator | RUNNING HANDLER 
[loadbalancer : Stop backup proxysql container] **************** 2026-03-09 00:57:56.851823 | orchestrator | Monday 09 March 2026 00:57:12 +0000 (0:00:01.498) 0:06:45.412 ********** 2026-03-09 00:57:56.851828 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.851850 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.851855 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.851860 | orchestrator | 2026-03-09 00:57:56.851864 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-09 00:57:56.851869 | orchestrator | Monday 09 March 2026 00:57:13 +0000 (0:00:01.073) 0:06:46.486 ********** 2026-03-09 00:57:56.851874 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.851879 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.851883 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.851888 | orchestrator | 2026-03-09 00:57:56.851894 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-09 00:57:56.851903 | orchestrator | Monday 09 March 2026 00:57:18 +0000 (0:00:05.571) 0:06:52.057 ********** 2026-03-09 00:57:56.851911 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.851919 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.851928 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.851937 | orchestrator | 2026-03-09 00:57:56.851945 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-09 00:57:56.851959 | orchestrator | Monday 09 March 2026 00:57:22 +0000 (0:00:03.857) 0:06:55.915 ********** 2026-03-09 00:57:56.851968 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.851977 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.851993 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.852002 | orchestrator | 2026-03-09 00:57:56.852011 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to 
start] ************* 2026-03-09 00:57:56.852019 | orchestrator | Monday 09 March 2026 00:57:38 +0000 (0:00:16.340) 0:07:12.256 ********** 2026-03-09 00:57:56.852028 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.852034 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.852039 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.852044 | orchestrator | 2026-03-09 00:57:56.852049 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-09 00:57:56.852054 | orchestrator | Monday 09 March 2026 00:57:40 +0000 (0:00:01.302) 0:07:13.558 ********** 2026-03-09 00:57:56.852059 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:56.852063 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:56.852068 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:56.852073 | orchestrator | 2026-03-09 00:57:56.852078 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-09 00:57:56.852087 | orchestrator | Monday 09 March 2026 00:57:45 +0000 (0:00:05.194) 0:07:18.753 ********** 2026-03-09 00:57:56.852091 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.852096 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.852101 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.852106 | orchestrator | 2026-03-09 00:57:56.852110 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-09 00:57:56.852115 | orchestrator | Monday 09 March 2026 00:57:45 +0000 (0:00:00.401) 0:07:19.154 ********** 2026-03-09 00:57:56.852120 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.852125 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.852129 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.852134 | orchestrator | 2026-03-09 00:57:56.852139 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 
2026-03-09 00:57:56.852144 | orchestrator | Monday 09 March 2026 00:57:46 +0000 (0:00:00.384) 0:07:19.539 ********** 2026-03-09 00:57:56.852149 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.852153 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.852158 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.852163 | orchestrator | 2026-03-09 00:57:56.852168 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-09 00:57:56.852173 | orchestrator | Monday 09 March 2026 00:57:46 +0000 (0:00:00.779) 0:07:20.318 ********** 2026-03-09 00:57:56.852177 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.852182 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.852187 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.852192 | orchestrator | 2026-03-09 00:57:56.852197 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-09 00:57:56.852201 | orchestrator | Monday 09 March 2026 00:57:47 +0000 (0:00:00.427) 0:07:20.745 ********** 2026-03-09 00:57:56.852206 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.852211 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.852216 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.852221 | orchestrator | 2026-03-09 00:57:56.852225 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-09 00:57:56.852230 | orchestrator | Monday 09 March 2026 00:57:47 +0000 (0:00:00.403) 0:07:21.149 ********** 2026-03-09 00:57:56.852235 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:56.852240 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:56.852244 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:56.852249 | orchestrator | 2026-03-09 00:57:56.852254 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 
2026-03-09 00:57:56.852259 | orchestrator | Monday 09 March 2026 00:57:48 +0000 (0:00:00.410) 0:07:21.559 ********** 2026-03-09 00:57:56.852264 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.852268 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.852273 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.852278 | orchestrator | 2026-03-09 00:57:56.852283 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-09 00:57:56.852288 | orchestrator | Monday 09 March 2026 00:57:53 +0000 (0:00:05.326) 0:07:26.886 ********** 2026-03-09 00:57:56.852292 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:56.852297 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:56.852302 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:56.852307 | orchestrator | 2026-03-09 00:57:56.852312 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:57:56.852317 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-09 00:57:56.852322 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-09 00:57:56.852327 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-09 00:57:56.852337 | orchestrator | 2026-03-09 00:57:56.852345 | orchestrator | 2026-03-09 00:57:56.852353 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:57:56.852361 | orchestrator | Monday 09 March 2026 00:57:54 +0000 (0:00:00.915) 0:07:27.801 ********** 2026-03-09 00:57:56.852369 | orchestrator | =============================================================================== 2026-03-09 00:57:56.852378 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 16.34s 2026-03-09 00:57:56.852386 | orchestrator | 
haproxy-config : Copying over nova haproxy config ----------------------- 7.40s 2026-03-09 00:57:56.852394 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.18s 2026-03-09 00:57:56.852403 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.11s 2026-03-09 00:57:56.852411 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 6.00s 2026-03-09 00:57:56.852420 | orchestrator | 2026-03-09 00:57:56 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:57:56.852425 | orchestrator | 2026-03-09 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:57:56.852430 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.82s 2026-03-09 00:57:56.852435 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.63s 2026-03-09 00:57:56.852440 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.57s 2026-03-09 00:57:56.852444 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.33s 2026-03-09 00:57:56.852449 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.32s 2026-03-09 00:57:56.852454 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.24s 2026-03-09 00:57:56.852459 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.19s 2026-03-09 00:57:56.852463 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.10s 2026-03-09 00:57:56.852468 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.06s 2026-03-09 00:57:56.852473 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.03s 2026-03-09 00:57:56.852478 | orchestrator | haproxy-config 
: Copying over cinder haproxy config --------------------- 4.95s 2026-03-09 00:57:56.852482 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.71s 2026-03-09 00:57:56.852487 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.66s 2026-03-09 00:57:56.852492 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.66s 2026-03-09 00:57:56.852497 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.36s 2026-03-09 00:57:59.880933 | orchestrator | 2026-03-09 00:57:59 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:57:59.881303 | orchestrator | 2026-03-09 00:57:59 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:57:59.883655 | orchestrator | 2026-03-09 00:57:59 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:57:59.883710 | orchestrator | 2026-03-09 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:02.925805 | orchestrator | 2026-03-09 00:58:02 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:58:02.926882 | orchestrator | 2026-03-09 00:58:02 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:58:02.928420 | orchestrator | 2026-03-09 00:58:02 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:58:02.928797 | orchestrator | 2026-03-09 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:05.968919 | orchestrator | 2026-03-09 00:58:05 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:58:05.969037 | orchestrator | 2026-03-09 00:58:05 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:58:05.970081 | orchestrator | 2026-03-09 00:58:05 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in 
state STARTED 2026-03-09 00:58:05.970332 | orchestrator | 2026-03-09 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:40.650444 | orchestrator | 2026-03-09 00:59:40 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:59:40.654437 | orchestrator | 2026-03-09 00:59:40 | INFO  | Task 
88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:59:40.656904 | orchestrator | 2026-03-09 00:59:40 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:59:40.657418 | orchestrator | 2026-03-09 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:43.710635 | orchestrator | 2026-03-09 00:59:43 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:59:43.718918 | orchestrator | 2026-03-09 00:59:43 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:59:43.722333 | orchestrator | 2026-03-09 00:59:43 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:59:43.722420 | orchestrator | 2026-03-09 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:46.768779 | orchestrator | 2026-03-09 00:59:46 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:59:46.770851 | orchestrator | 2026-03-09 00:59:46 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state STARTED 2026-03-09 00:59:46.772177 | orchestrator | 2026-03-09 00:59:46 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:59:46.772220 | orchestrator | 2026-03-09 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:49.809826 | orchestrator | 2026-03-09 00:59:49 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:59:49.813209 | orchestrator | 2026-03-09 00:59:49 | INFO  | Task 88f4461b-111e-4323-bd04-a18d990f2de6 is in state SUCCESS 2026-03-09 00:59:49.815106 | orchestrator | 2026-03-09 00:59:49.815163 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 00:59:49.815179 | orchestrator | 2.16.14 2026-03-09 00:59:49.815191 | orchestrator | 2026-03-09 00:59:49.815202 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 
2026-03-09 00:59:49.815212 | orchestrator |
2026-03-09 00:59:49.815221 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-09 00:59:49.815231 | orchestrator | Monday 09 March 2026 00:47:22 +0000 (0:00:00.758) 0:00:00.758 **********
2026-03-09 00:59:49.815241 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:49.815270 | orchestrator |
2026-03-09 00:59:49.815281 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-09 00:59:49.815291 | orchestrator | Monday 09 March 2026 00:47:24 +0000 (0:00:01.190) 0:00:01.948 **********
2026-03-09 00:59:49.815305 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.815315 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.815324 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.815334 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.815346 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.815359 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.815368 | orchestrator |
2026-03-09 00:59:49.815378 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-09 00:59:49.815387 | orchestrator | Monday 09 March 2026 00:47:25 +0000 (0:00:01.563) 0:00:03.512 **********
2026-03-09 00:59:49.815397 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.815436 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.815442 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.815451 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.815460 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.815468 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.815477 | orchestrator |
2026-03-09 00:59:49.815486 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-09 00:59:49.815496 | orchestrator | Monday 09 March 2026 00:47:26 +0000 (0:00:01.009) 0:00:04.522 **********
2026-03-09 00:59:49.815504 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.815511 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.815520 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.815528 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.815538 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.815546 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.815555 | orchestrator |
2026-03-09 00:59:49.815563 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-09 00:59:49.815571 | orchestrator | Monday 09 March 2026 00:47:27 +0000 (0:00:00.853) 0:00:05.375 **********
2026-03-09 00:59:49.815580 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.815588 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.815596 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.815605 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.815614 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.815623 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.815631 | orchestrator |
2026-03-09 00:59:49.815640 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-09 00:59:49.815649 | orchestrator | Monday 09 March 2026 00:47:28 +0000 (0:00:00.722) 0:00:06.098 **********
2026-03-09 00:59:49.815658 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.815668 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.815848 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.815859 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.815867 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.815873 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.815880 | orchestrator |
2026-03-09 00:59:49.815886 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-09 00:59:49.815893 | orchestrator | Monday 09 March 2026 00:47:29 +0000 (0:00:00.929) 0:00:07.027 **********
2026-03-09 00:59:49.815899 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.815906 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.815920 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.815929 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.815939 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.815948 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.815956 | orchestrator |
2026-03-09 00:59:49.815966 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-09 00:59:49.815975 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:00.876) 0:00:07.904 **********
2026-03-09 00:59:49.815985 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.815995 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.816014 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.816024 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.816032 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.816041 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.816050 | orchestrator |
2026-03-09 00:59:49.816059 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-09 00:59:49.816068 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:00.783) 0:00:08.688 **********
2026-03-09 00:59:49.816078 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.816087 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.816095 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.816104 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.816135 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.816143 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.816152 | orchestrator |
2026-03-09 00:59:49.816160 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-09 00:59:49.816169 | orchestrator | Monday 09 March 2026 00:47:32 +0000 (0:00:01.273) 0:00:09.962 **********
2026-03-09 00:59:49.816175 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:59:49.816181 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:59:49.816187 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:59:49.816192 | orchestrator |
2026-03-09 00:59:49.816198 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-09 00:59:49.816203 | orchestrator | Monday 09 March 2026 00:47:32 +0000 (0:00:00.692) 0:00:10.655 **********
2026-03-09 00:59:49.816209 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.816214 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.816219 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.816239 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.816250 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.816264 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.816273 | orchestrator |
2026-03-09 00:59:49.816282 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-09 00:59:49.816291 | orchestrator | Monday 09 March 2026 00:47:34 +0000 (0:00:02.021) 0:00:12.676 **********
2026-03-09 00:59:49.816298 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:59:49.816303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:59:49.816308 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:59:49.816318 | orchestrator |
2026-03-09 00:59:49.816324 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-09 00:59:49.816329 | orchestrator | Monday 09 March 2026 00:47:37 +0000 (0:00:02.665) 0:00:15.341 **********
2026-03-09 00:59:49.816335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:59:49.816340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 00:59:49.816346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 00:59:49.816394 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816400 | orchestrator |
2026-03-09 00:59:49.816405 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-09 00:59:49.816410 | orchestrator | Monday 09 March 2026 00:47:38 +0000 (0:00:01.328) 0:00:16.670 **********
2026-03-09 00:59:49.816418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816442 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816448 | orchestrator |
2026-03-09 00:59:49.816453 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-09 00:59:49.816459 | orchestrator | Monday 09 March 2026 00:47:40 +0000 (0:00:01.677) 0:00:18.348 **********
2026-03-09 00:59:49.816465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816487 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816493 | orchestrator |
2026-03-09 00:59:49.816498 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-09 00:59:49.816504 | orchestrator | Monday 09 March 2026 00:47:41 +0000 (0:00:00.728) 0:00:19.076 **********
2026-03-09 00:59:49.816517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-09 00:47:35.690608', 'end': '2026-03-09 00:47:35.793003', 'delta': '0:00:00.102395', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-09 00:47:36.481208', 'end': '2026-03-09 00:47:36.599109', 'delta': '0:00:00.117901', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-09 00:47:37.237567', 'end': '2026-03-09 00:47:37.346985', 'delta': '0:00:00.109418', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.816540 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816546 | orchestrator |
2026-03-09 00:59:49.816551 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-09 00:59:49.816557 | orchestrator | Monday 09 March 2026 00:47:41 +0000 (0:00:00.368) 0:00:19.445 **********
2026-03-09 00:59:49.816562 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.816568 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.816573 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.816579 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.816584 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.816589 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.816595 | orchestrator |
2026-03-09 00:59:49.816602 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-09 00:59:49.816610 | orchestrator | Monday 09 March 2026 00:47:45 +0000 (0:00:04.118) 0:00:23.563 **********
2026-03-09 00:59:49.816616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-09 00:59:49.816622 | orchestrator |
2026-03-09 00:59:49.816627 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-09 00:59:49.816632 | orchestrator | Monday 09 March 2026 00:47:47 +0000 (0:00:01.343) 0:00:24.906 **********
2026-03-09 00:59:49.816638 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816643 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.816648 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.816654 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.816662 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.816669 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.816678 | orchestrator |
2026-03-09 00:59:49.816686 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-09 00:59:49.816700 | orchestrator | Monday 09 March 2026 00:47:49 +0000 (0:00:02.433) 0:00:27.339 **********
2026-03-09 00:59:49.816726 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.816735 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816816 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.816826 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.816835 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.816844 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.816853 | orchestrator |
2026-03-09 00:59:49.816863 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 00:59:49.816872 | orchestrator | Monday 09 March 2026 00:47:52 +0000 (0:00:02.717) 0:00:30.057 **********
2026-03-09 00:59:49.816881 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816891 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.816901 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.816910 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.816916 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.816922 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.816927 | orchestrator |
2026-03-09 00:59:49.816933 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-09 00:59:49.816938 | orchestrator | Monday 09 March 2026 00:47:54 +0000 (0:00:02.400) 0:00:32.457 **********
2026-03-09 00:59:49.816944 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.816949 | orchestrator |
2026-03-09 00:59:49.816955 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-09 00:59:49.816960 | orchestrator | Monday 09 March 2026 00:47:54 +0000 (0:00:00.173) 0:00:32.630 **********
2026-03-09 00:59:49.816965 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817002 | orchestrator |
2026-03-09 00:59:49.817008 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 00:59:49.817014 | orchestrator | Monday 09 March 2026 00:47:55 +0000 (0:00:00.511) 0:00:33.142 **********
2026-03-09 00:59:49.817019 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817025 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817030 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817041 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817047 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817052 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817058 | orchestrator |
2026-03-09 00:59:49.817063 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-09 00:59:49.817069 | orchestrator | Monday 09 March 2026 00:47:56 +0000 (0:00:01.400) 0:00:34.543 **********
2026-03-09 00:59:49.817074 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817080 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817085 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817091 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817096 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817102 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817107 | orchestrator |
2026-03-09 00:59:49.817113 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-09 00:59:49.817118 | orchestrator | Monday 09 March 2026 00:47:58 +0000 (0:00:01.590) 0:00:36.133 **********
2026-03-09 00:59:49.817123 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817129 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817134 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817140 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817145 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817150 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817156 | orchestrator |
2026-03-09 00:59:49.817161 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-09 00:59:49.817167 | orchestrator | Monday 09 March 2026 00:47:59 +0000 (0:00:01.426) 0:00:37.559 **********
2026-03-09 00:59:49.817172 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817177 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817183 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817188 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817194 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817199 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817204 | orchestrator |
2026-03-09 00:59:49.817210 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-09 00:59:49.817215 | orchestrator | Monday 09 March 2026 00:48:01 +0000 (0:00:01.732) 0:00:39.292 **********
2026-03-09 00:59:49.817221 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817226 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817231 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817239 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817250 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817265 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817274 | orchestrator |
2026-03-09 00:59:49.817284 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-09 00:59:49.817293 | orchestrator | Monday 09 March 2026 00:48:03 +0000 (0:00:01.769) 0:00:41.062 **********
2026-03-09 00:59:49.817304 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817314 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817324 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817331 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817337 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817363 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817370 | orchestrator |
2026-03-09 00:59:49.817375 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-09 00:59:49.817381 | orchestrator | Monday 09 March 2026 00:48:05 +0000 (0:00:02.030) 0:00:43.092 **********
2026-03-09 00:59:49.817393 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.817398 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.817404 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.817409 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.817415 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.817420 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.817425 | orchestrator |
2026-03-09 00:59:49.817431 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-09 00:59:49.817440 | orchestrator | Monday 09 March 2026 00:48:06 +0000 (0:00:01.321) 0:00:44.414 **********
2026-03-09 00:59:49.817479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0', 'dm-uuid-LVM-xoYiAr1LbGAgQx9YTSY4h87WEEAMBYG6KvCGKgRKiE7cyM04uk8bDW8y2n0svaKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573', 'dm-uuid-LVM-gl3VxdhyGcL39CYSAZ2UylTo0uqBhzMRbQXrveI7l53qqf8ztRDRHEHmQd5yahj6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17', 'dm-uuid-LVM-r6O3uel0WqqZv6vhGYFFKRbvfWkcwOjX1gmhQS9oeLec7ivOjKlRCcgI2KpJCYRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553', 'dm-uuid-LVM-ztfRVe47Oaz8Dx4feBZw1IAdMSfcHeyflLsgo48Fz0kcNSIrp8VYsCm7tSHUqDEd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f', 'dm-uuid-LVM-GDcxOYRYMTfbdE6bm9RUedT2ja1WXcothVu0Q3hYuGWfxKTaMQ5s9URketbQftD2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47', 'dm-uuid-LVM-lgVd3TGKAanyx1UuubDE8F4fOcWVj8DjuQV0cGgI4D2C5F0zBzfD0ig57Sb9wsbD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.817974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part1', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part14', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part15', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part16', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:49.817987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:49.818004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.818126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.818149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.818160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.818170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.818203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:49.818216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders':
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.818324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-09 00:59:49.818330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821303 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u1mKP3-MJVB-fCwd-HeH7-ziOJ-ldBN-jXUfdI', 'scsi-0QEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba', 'scsi-SQEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbVuqY-9dCU-ISmZ-mZSm-7ebn-T3LB-YnmwYS', 'scsi-0QEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec', 'scsi-SQEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560', 'scsi-SQEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821427 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7ZLXT4-E7kf-zLjW-diLI-wHLN-Z5Od-qwtJ62', 'scsi-0QEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284', 'scsi-SQEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22UVB5-Gz8Y-u89a-DzGO-vLep-gcHN-21CHr2', 'scsi-0QEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393', 'scsi-SQEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f', 'scsi-SQEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821523 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.821537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821566 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.821581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821642 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.821658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-09 00:59:49.821671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821761 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6Q0C3-FUqs-T6yd-w7Jq-twLV-onDI-LnXz1U', 'scsi-0QEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9', 'scsi-SQEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5GWMwc-VjMm-BxBU-2FIP-P70X-LgzN-b8AaYw', 'scsi-0QEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3', 'scsi-SQEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part1', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part14', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part15', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part16', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821908 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c', 'scsi-SQEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.821966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.821979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.822001 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.822074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.822094 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.822110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.822125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.822165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:49.822205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part1', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part14', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part15', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part16', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.822240 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:49.822254 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.822268 | orchestrator | 2026-03-09 00:59:49.822282 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-09 00:59:49.822296 | orchestrator | Monday 09 March 2026 00:48:09 +0000 (0:00:02.953) 0:00:47.367 ********** 2026-03-09 00:59:49.822325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0', 'dm-uuid-LVM-xoYiAr1LbGAgQx9YTSY4h87WEEAMBYG6KvCGKgRKiE7cyM04uk8bDW8y2n0svaKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573', 'dm-uuid-LVM-gl3VxdhyGcL39CYSAZ2UylTo0uqBhzMRbQXrveI7l53qqf8ztRDRHEHmQd5yahj6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7ZLXT4-E7kf-zLjW-diLI-wHLN-Z5Od-qwtJ62', 'scsi-0QEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284', 'scsi-SQEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22UVB5-Gz8Y-u89a-DzGO-vLep-gcHN-21CHr2', 'scsi-0QEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393', 'scsi-SQEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822585 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17', 'dm-uuid-LVM-r6O3uel0WqqZv6vhGYFFKRbvfWkcwOjX1gmhQS9oeLec7ivOjKlRCcgI2KpJCYRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f', 'scsi-SQEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822627 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f', 'dm-uuid-LVM-GDcxOYRYMTfbdE6bm9RUedT2ja1WXcothVu0Q3hYuGWfxKTaMQ5s9URketbQftD2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822727 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.822743 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822779 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822793 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553', 'dm-uuid-LVM-ztfRVe47Oaz8Dx4feBZw1IAdMSfcHeyflLsgo48Fz0kcNSIrp8VYsCm7tSHUqDEd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822807 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47', 'dm-uuid-LVM-lgVd3TGKAanyx1UuubDE8F4fOcWVj8DjuQV0cGgI4D2C5F0zBzfD0ig57Sb9wsbD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822864 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.822970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.822986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823024 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823039 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823106 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823137 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u1mKP3-MJVB-fCwd-HeH7-ziOJ-ldBN-jXUfdI', 'scsi-0QEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba', 'scsi-SQEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823178 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823219 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbVuqY-9dCU-ISmZ-mZSm-7ebn-T3LB-YnmwYS', 'scsi-0QEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec', 'scsi-SQEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823265 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823468 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560', 'scsi-SQEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823510 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part1', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part14', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part15', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part16', 'scsi-SQEMU_QEMU_HARDDISK_c16889b1-7b46-4e25-af69-310fb50c7b7e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823586 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823618 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823645 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823674 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.823691 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823703 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.823736 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823758 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823773 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823787 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823802 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823843 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823871 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part1', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part14', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part15', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part16', 'scsi-SQEMU_QEMU_HARDDISK_80fcb8a4-b4d4-4fb3-9956-dfeaf07bcdcb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823901 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823923 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823935 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.823953 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6Q0C3-FUqs-T6yd-w7Jq-twLV-onDI-LnXz1U', 'scsi-0QEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9', 'scsi-SQEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.823966 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824041 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5GWMwc-VjMm-BxBU-2FIP-P70X-LgzN-b8AaYw', 'scsi-0QEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3', 'scsi-SQEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c', 'scsi-SQEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824100 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824114 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.824128 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824150 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824163 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824185 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part1', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part14', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part15', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part16', 'scsi-SQEMU_QEMU_HARDDISK_943da615-a78a-4b04-b113-1769e9052e23-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:49.824208 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:49.824222 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.824235 | orchestrator | 2026-03-09 00:59:49.824270 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-09 00:59:49.824284 | orchestrator | Monday 09 March 2026 00:48:11 +0000 (0:00:02.225) 0:00:49.593 ********** 2026-03-09 00:59:49.824298 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.824312 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.824324 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.824336 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.824348 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.824372 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.824385 | orchestrator | 2026-03-09 00:59:49.824397 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-09 00:59:49.824409 | orchestrator | Monday 09 March 2026 00:48:15 +0000 (0:00:03.975) 0:00:53.568 ********** 2026-03-09 00:59:49.824421 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.824433 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.824445 | 
orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.824457 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.824469 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.824480 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.824500 | orchestrator | 2026-03-09 00:59:49.824511 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 00:59:49.824524 | orchestrator | Monday 09 March 2026 00:48:17 +0000 (0:00:01.472) 0:00:55.041 ********** 2026-03-09 00:59:49.824536 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.824549 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.824562 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.824574 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.824586 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.824598 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.824610 | orchestrator | 2026-03-09 00:59:49.824622 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 00:59:49.824635 | orchestrator | Monday 09 March 2026 00:48:18 +0000 (0:00:01.541) 0:00:56.583 ********** 2026-03-09 00:59:49.824646 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.824659 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.824671 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.824683 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.824697 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.824710 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.824744 | orchestrator | 2026-03-09 00:59:49.824770 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 00:59:49.824784 | orchestrator | Monday 09 March 2026 00:48:20 +0000 (0:00:01.606) 0:00:58.189 ********** 2026-03-09 00:59:49.824797 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:59:49.824810 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.824824 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.824832 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.824840 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.824848 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.824855 | orchestrator | 2026-03-09 00:59:49.824863 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 00:59:49.824871 | orchestrator | Monday 09 March 2026 00:48:22 +0000 (0:00:02.276) 0:01:00.466 ********** 2026-03-09 00:59:49.824879 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.824886 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.824894 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.824902 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.824909 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.824917 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.824925 | orchestrator | 2026-03-09 00:59:49.824933 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-09 00:59:49.824940 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:01.080) 0:01:01.547 ********** 2026-03-09 00:59:49.824948 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-09 00:59:49.824956 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-09 00:59:49.824973 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-09 00:59:49.824981 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-09 00:59:49.824989 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-09 00:59:49.824997 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-09 00:59:49.825004 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 
2026-03-09 00:59:49.825021 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 00:59:49.825030 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-09 00:59:49.825038 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-09 00:59:49.825045 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-09 00:59:49.825053 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-09 00:59:49.825060 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-09 00:59:49.825068 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-09 00:59:49.825076 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-09 00:59:49.825109 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-09 00:59:49.825118 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-09 00:59:49.825126 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-09 00:59:49.825133 | orchestrator | 2026-03-09 00:59:49.825151 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-09 00:59:49.825159 | orchestrator | Monday 09 March 2026 00:48:28 +0000 (0:00:05.172) 0:01:06.720 ********** 2026-03-09 00:59:49.825167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 00:59:49.825175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 00:59:49.825183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 00:59:49.825191 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-09 00:59:49.825207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-09 00:59:49.825215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 00:59:49.825222 | orchestrator | skipping: [testbed-node-4] 
2026-03-09 00:59:49.825230 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-09 00:59:49.825246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 00:59:49.825254 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 00:59:49.825270 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.825277 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:59:49.825285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:59:49.825293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:59:49.825301 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.825308 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-09 00:59:49.825316 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-09 00:59:49.825323 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-09 00:59:49.825330 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.825336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-09 00:59:49.825343 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-09 00:59:49.825349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-09 00:59:49.825363 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.825370 | orchestrator | 2026-03-09 00:59:49.825376 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-09 00:59:49.825383 | orchestrator | Monday 09 March 2026 00:48:30 +0000 (0:00:01.395) 0:01:08.115 ********** 2026-03-09 00:59:49.825390 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.825396 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.825403 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.825410 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.825417 | orchestrator | 2026-03-09 00:59:49.825424 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 00:59:49.825431 | orchestrator | Monday 09 March 2026 00:48:31 +0000 (0:00:01.352) 0:01:09.467 ********** 2026-03-09 00:59:49.825438 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825445 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.825451 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.825458 | orchestrator | 2026-03-09 00:59:49.825464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 00:59:49.825471 | orchestrator | Monday 09 March 2026 00:48:32 +0000 (0:00:00.444) 0:01:09.912 ********** 2026-03-09 00:59:49.825478 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825484 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.825495 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.825502 | orchestrator | 2026-03-09 00:59:49.825509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 00:59:49.825515 | orchestrator | Monday 09 March 2026 00:48:32 +0000 (0:00:00.525) 0:01:10.438 ********** 2026-03-09 00:59:49.825522 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825528 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.825535 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.825541 | orchestrator | 2026-03-09 00:59:49.825548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-09 00:59:49.825555 | orchestrator | Monday 09 March 2026 00:48:33 +0000 (0:00:00.778) 0:01:11.216 ********** 2026-03-09 00:59:49.825561 | orchestrator | 
ok: [testbed-node-3] 2026-03-09 00:59:49.825568 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.825575 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.825581 | orchestrator | 2026-03-09 00:59:49.825588 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-09 00:59:49.825598 | orchestrator | Monday 09 March 2026 00:48:34 +0000 (0:00:00.861) 0:01:12.078 ********** 2026-03-09 00:59:49.825605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.825612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.825618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.825625 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825632 | orchestrator | 2026-03-09 00:59:49.825638 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-09 00:59:49.825645 | orchestrator | Monday 09 March 2026 00:48:34 +0000 (0:00:00.415) 0:01:12.493 ********** 2026-03-09 00:59:49.825651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.825658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.825664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.825671 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825677 | orchestrator | 2026-03-09 00:59:49.825684 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-09 00:59:49.825691 | orchestrator | Monday 09 March 2026 00:48:35 +0000 (0:00:00.439) 0:01:12.933 ********** 2026-03-09 00:59:49.825697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.825704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.825756 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-09 00:59:49.825765 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.825772 | orchestrator | 2026-03-09 00:59:49.825778 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-09 00:59:49.825785 | orchestrator | Monday 09 March 2026 00:48:35 +0000 (0:00:00.527) 0:01:13.460 ********** 2026-03-09 00:59:49.825792 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.825798 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.825805 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.825811 | orchestrator | 2026-03-09 00:59:49.825818 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-09 00:59:49.825825 | orchestrator | Monday 09 March 2026 00:48:36 +0000 (0:00:00.500) 0:01:13.961 ********** 2026-03-09 00:59:49.825831 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 00:59:49.825838 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-09 00:59:49.825851 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-09 00:59:49.825857 | orchestrator | 2026-03-09 00:59:49.825864 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-09 00:59:49.825871 | orchestrator | Monday 09 March 2026 00:48:37 +0000 (0:00:01.464) 0:01:15.425 ********** 2026-03-09 00:59:49.825877 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 00:59:49.825884 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 00:59:49.825896 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 00:59:49.825902 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 00:59:49.825909 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 00:59:49.825915 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 00:59:49.825922 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 00:59:49.825928 | orchestrator | 2026-03-09 00:59:49.825935 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-09 00:59:49.825941 | orchestrator | Monday 09 March 2026 00:48:38 +0000 (0:00:00.886) 0:01:16.311 ********** 2026-03-09 00:59:49.825949 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 00:59:49.825955 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 00:59:49.825962 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 00:59:49.825977 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 00:59:49.825984 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 00:59:49.825990 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 00:59:49.825997 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 00:59:49.826003 | orchestrator | 2026-03-09 00:59:49.826010 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 00:59:49.826062 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:02.571) 0:01:18.883 ********** 2026-03-09 00:59:49.826069 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.826076 | orchestrator | 2026-03-09 00:59:49.826083 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-09 00:59:49.826089 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:01.394) 0:01:20.277 ********** 2026-03-09 00:59:49.826096 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.826103 | orchestrator | 2026-03-09 00:59:49.826109 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 00:59:49.826116 | orchestrator | Monday 09 March 2026 00:48:44 +0000 (0:00:01.616) 0:01:21.894 ********** 2026-03-09 00:59:49.826123 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.826129 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.826136 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.826147 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.826155 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.826169 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.826176 | orchestrator | 2026-03-09 00:59:49.826182 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 00:59:49.826189 | orchestrator | Monday 09 March 2026 00:48:45 +0000 (0:00:01.761) 0:01:23.656 ********** 2026-03-09 00:59:49.826195 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.826202 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.826209 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.826216 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.826222 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.826229 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.826236 | orchestrator | 2026-03-09 00:59:49.826242 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 00:59:49.826249 | orchestrator | Monday 09 March 2026 00:48:47 +0000 
(0:00:01.352) 0:01:25.008 ********** 2026-03-09 00:59:49.826261 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.826268 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.826274 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.826281 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.826287 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.826294 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.826300 | orchestrator | 2026-03-09 00:59:49.826307 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 00:59:49.826314 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:01.105) 0:01:26.113 ********** 2026-03-09 00:59:49.826320 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.826327 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.826333 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.826340 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.826346 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.826353 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.826360 | orchestrator | 2026-03-09 00:59:49.826366 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 00:59:49.826373 | orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:00.752) 0:01:26.866 ********** 2026-03-09 00:59:49.826379 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.826386 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.826393 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.826407 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.826414 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.826432 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.826439 | orchestrator | 2026-03-09 00:59:49.826445 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-09 00:59:49.826452 | orchestrator | Monday 09 March 2026 00:48:50 +0000 (0:00:01.178) 0:01:28.044 ********** 2026-03-09 00:59:49.826459 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.826465 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.826472 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.826479 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.826486 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.826492 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.826499 | orchestrator | 2026-03-09 00:59:49.826506 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 00:59:49.826512 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:00.845) 0:01:28.890 ********** 2026-03-09 00:59:49.826526 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.826533 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.826539 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.826546 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.826553 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.826559 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.826566 | orchestrator | 2026-03-09 00:59:49.826572 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:59:49.826579 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:00.761) 0:01:29.651 ********** 2026-03-09 00:59:49.826586 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.826592 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.826599 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.826605 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.826612 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.826619 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.826626 | orchestrator 
| 2026-03-09 00:59:49.826632 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 00:59:49.826639 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:01.419) 0:01:31.071 ********** 2026-03-09 00:59:49.826646 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.826653 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.826659 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.826666 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.826679 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.826686 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.826692 | orchestrator | 2026-03-09 00:59:49.826699 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:59:49.826706 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:01.217) 0:01:32.289 ********** 2026-03-09 00:59:49.826723 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.826730 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.826737 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.826743 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.826750 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.826757 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.826763 | orchestrator | 2026-03-09 00:59:49.826770 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:59:49.826777 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:00.590) 0:01:32.880 ********** 2026-03-09 00:59:49.826783 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.826790 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.826797 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.826803 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.826810 | orchestrator | ok: [testbed-node-1] 2026-03-09 
00:59:49.826816 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.826823 | orchestrator |
2026-03-09 00:59:49.826830 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-09 00:59:49.826836 | orchestrator | Monday 09 March 2026 00:48:56 +0000 (0:00:00.976) 0:01:33.857 **********
2026-03-09 00:59:49.826843 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.826850 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.826856 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.826866 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.826873 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.826880 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.826887 | orchestrator |
2026-03-09 00:59:49.826893 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-09 00:59:49.826900 | orchestrator | Monday 09 March 2026 00:48:57 +0000 (0:00:00.945) 0:01:34.802 **********
2026-03-09 00:59:49.826906 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.826913 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.826920 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.826926 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.826933 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.826940 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.826946 | orchestrator |
2026-03-09 00:59:49.826953 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-09 00:59:49.826959 | orchestrator | Monday 09 March 2026 00:48:59 +0000 (0:00:02.169) 0:01:36.972 **********
2026-03-09 00:59:49.826966 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.826973 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.826979 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.826986 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.826993 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.826999 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827006 | orchestrator |
2026-03-09 00:59:49.827012 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-09 00:59:49.827019 | orchestrator | Monday 09 March 2026 00:49:00 +0000 (0:00:01.323) 0:01:38.295 **********
2026-03-09 00:59:49.827026 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827033 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827039 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827046 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827052 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827059 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827065 | orchestrator |
2026-03-09 00:59:49.827072 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-09 00:59:49.827083 | orchestrator | Monday 09 March 2026 00:49:02 +0000 (0:00:01.494) 0:01:39.789 **********
2026-03-09 00:59:49.827090 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827097 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827111 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827117 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827129 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827136 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827142 | orchestrator |
2026-03-09 00:59:49.827149 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-09 00:59:49.827156 | orchestrator | Monday 09 March 2026 00:49:03 +0000 (0:00:01.003) 0:01:40.793 **********
2026-03-09 00:59:49.827163 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827170 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827176 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827183 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.827190 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.827197 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.827203 | orchestrator |
2026-03-09 00:59:49.827210 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-09 00:59:49.827217 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:01.353) 0:01:42.147 **********
2026-03-09 00:59:49.827223 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.827230 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.827236 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.827243 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.827250 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.827257 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.827263 | orchestrator |
2026-03-09 00:59:49.827270 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-09 00:59:49.827277 | orchestrator | Monday 09 March 2026 00:49:05 +0000 (0:00:00.986) 0:01:43.134 **********
2026-03-09 00:59:49.827283 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.827290 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.827297 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.827303 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.827310 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.827325 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.827332 | orchestrator |
2026-03-09 00:59:49.827339 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-09 00:59:49.827346 | orchestrator | Monday 09 March 2026 00:49:07 +0000 (0:00:01.687) 0:01:44.822 **********
2026-03-09 00:59:49.827352 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:59:49.827359 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:59:49.827366 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:59:49.827372 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:49.827379 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:49.827386 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:49.827392 | orchestrator |
2026-03-09 00:59:49.827399 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-09 00:59:49.827406 | orchestrator | Monday 09 March 2026 00:49:09 +0000 (0:00:02.274) 0:01:47.096 **********
2026-03-09 00:59:49.827413 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:59:49.827419 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:59:49.827426 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:49.827433 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:49.827439 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:49.827446 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:59:49.827453 | orchestrator |
2026-03-09 00:59:49.827459 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-09 00:59:49.827466 | orchestrator | Monday 09 March 2026 00:49:12 +0000 (0:00:03.083) 0:01:50.180 **********
2026-03-09 00:59:49.827473 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:49.827485 | orchestrator |
2026-03-09 00:59:49.827493 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-09 00:59:49.827500 | orchestrator | Monday 09 March 2026 00:49:13 +0000 (0:00:01.250) 0:01:51.430 **********
2026-03-09 00:59:49.827507 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827513 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827531 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827538 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827545 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827552 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827558 | orchestrator |
2026-03-09 00:59:49.827565 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-09 00:59:49.827572 | orchestrator | Monday 09 March 2026 00:49:14 +0000 (0:00:00.701) 0:01:52.131 **********
2026-03-09 00:59:49.827578 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827585 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827592 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827598 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827605 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827611 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827618 | orchestrator |
2026-03-09 00:59:49.827625 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-09 00:59:49.827631 | orchestrator | Monday 09 March 2026 00:49:15 +0000 (0:00:00.983) 0:01:53.115 **********
2026-03-09 00:59:49.827638 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:59:49.827644 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:59:49.827651 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:59:49.827658 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:59:49.827664 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:59:49.827671 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:59:49.827677 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:59:49.827684 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:59:49.827691 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:59:49.827697 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:59:49.827708 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:59:49.827744 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:59:49.827751 | orchestrator |
2026-03-09 00:59:49.827758 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-09 00:59:49.827764 | orchestrator | Monday 09 March 2026 00:49:17 +0000 (0:00:01.848) 0:01:54.964 **********
2026-03-09 00:59:49.827771 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:59:49.827778 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:59:49.827784 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:59:49.827791 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:49.827798 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:49.827804 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:49.827811 | orchestrator |
2026-03-09 00:59:49.827825 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-09 00:59:49.827832 | orchestrator | Monday 09 March 2026 00:49:18 +0000 (0:00:01.480) 0:01:56.444 **********
2026-03-09 00:59:49.827839 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827846 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827852 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827865 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827872 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827879 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827885 | orchestrator |
2026-03-09 00:59:49.827892 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-09 00:59:49.827898 | orchestrator | Monday 09 March 2026 00:49:19 +0000 (0:00:00.960) 0:01:57.126 **********
2026-03-09 00:59:49.827905 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827912 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827918 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827925 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827931 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827938 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.827944 | orchestrator |
2026-03-09 00:59:49.827951 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-09 00:59:49.827957 | orchestrator | Monday 09 March 2026 00:49:20 +0000 (0:00:00.960) 0:01:58.087 **********
2026-03-09 00:59:49.827964 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.827971 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.827977 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.827984 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.827990 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.827997 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828003 | orchestrator |
2026-03-09 00:59:49.828010 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-09 00:59:49.828017 | orchestrator | Monday 09 March 2026 00:49:21 +0000 (0:00:00.737) 0:01:58.824 **********
2026-03-09 00:59:49.828024 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:49.828030 | orchestrator |
2026-03-09 00:59:49.828037 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-09 00:59:49.828044 | orchestrator | Monday 09 March 2026 00:49:22 +0000 (0:00:01.414) 0:02:00.238 **********
2026-03-09 00:59:49.828050 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.828057 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.828064 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.828070 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.828077 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.828084 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.828090 | orchestrator |
2026-03-09 00:59:49.828097 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-09 00:59:49.828107 | orchestrator | Monday 09 March 2026 00:50:09 +0000 (0:00:47.497) 0:02:47.736 **********
2026-03-09 00:59:49.828115 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:59:49.828121 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:59:49.828128 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:59:49.828135 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828141 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:59:49.828148 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:59:49.828155 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:59:49.828161 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828168 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:59:49.828175 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:59:49.828181 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:59:49.828188 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828195 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:59:49.828206 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:59:49.828213 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:59:49.828219 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828226 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:59:49.828233 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:59:49.828239 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:59:49.828246 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828258 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:59:49.828265 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:59:49.828271 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:59:49.828278 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828284 | orchestrator |
2026-03-09 00:59:49.828290 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-09 00:59:49.828296 | orchestrator | Monday 09 March 2026 00:50:10 +0000 (0:00:00.805) 0:02:48.541 **********
2026-03-09 00:59:49.828302 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828309 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828315 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828321 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828327 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828333 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828339 | orchestrator |
2026-03-09 00:59:49.828346 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-09 00:59:49.828352 | orchestrator | Monday 09 March 2026 00:50:11 +0000 (0:00:00.931) 0:02:49.472 **********
2026-03-09 00:59:49.828359 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828365 | orchestrator |
2026-03-09 00:59:49.828371 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-09 00:59:49.828377 | orchestrator | Monday 09 March 2026 00:50:11 +0000 (0:00:00.174) 0:02:49.647 **********
2026-03-09 00:59:49.828383 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828389 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828395 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828401 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828407 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828414 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828420 | orchestrator |
2026-03-09 00:59:49.828426 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-09 00:59:49.828432 | orchestrator | Monday 09 March 2026 00:50:12 +0000 (0:00:00.724) 0:02:50.371 **********
2026-03-09 00:59:49.828439 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828445 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828451 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828457 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828463 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828469 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828475 | orchestrator |
2026-03-09 00:59:49.828481 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-09 00:59:49.828488 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:01.110) 0:02:51.482 **********
2026-03-09 00:59:49.828494 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828500 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828506 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828512 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828518 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828525 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828531 | orchestrator |
2026-03-09 00:59:49.828544 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-09 00:59:49.828551 | orchestrator | Monday 09 March 2026 00:50:14 +0000 (0:00:00.878) 0:02:52.360 **********
2026-03-09 00:59:49.828557 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.828563 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.828569 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.828575 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.828582 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.828588 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.828594 | orchestrator |
2026-03-09 00:59:49.828600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-09 00:59:49.828606 | orchestrator | Monday 09 March 2026 00:50:17 +0000 (0:00:02.574) 0:02:54.935 **********
2026-03-09 00:59:49.828616 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.828623 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.828629 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.828635 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.828641 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.828647 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.828653 | orchestrator |
2026-03-09 00:59:49.828659 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-09 00:59:49.828666 | orchestrator | Monday 09 March 2026 00:50:18 +0000 (0:00:00.841) 0:02:55.776 **********
2026-03-09 00:59:49.828672 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:49.828679 | orchestrator |
2026-03-09 00:59:49.828685 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-09 00:59:49.828691 | orchestrator | Monday 09 March 2026 00:50:20 +0000 (0:00:02.002) 0:02:57.779 **********
2026-03-09 00:59:49.828697 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828703 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828709 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828725 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828731 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828737 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828750 | orchestrator |
2026-03-09 00:59:49.828757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-09 00:59:49.828763 | orchestrator | Monday 09 March 2026 00:50:21 +0000 (0:00:01.379) 0:02:59.159 **********
2026-03-09 00:59:49.828769 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828776 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828782 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828788 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828794 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828801 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828807 | orchestrator |
2026-03-09 00:59:49.828813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-09 00:59:49.828820 | orchestrator | Monday 09 March 2026 00:50:22 +0000 (0:00:00.946) 0:03:00.106 **********
2026-03-09 00:59:49.828826 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828832 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828842 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828848 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828854 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828860 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828867 | orchestrator |
2026-03-09 00:59:49.828873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-09 00:59:49.828879 | orchestrator | Monday 09 March 2026 00:50:23 +0000 (0:00:01.239) 0:03:01.345 **********
2026-03-09 00:59:49.828885 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828891 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828898 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828904 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828914 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828920 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828927 | orchestrator |
2026-03-09 00:59:49.828933 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-09 00:59:49.828939 | orchestrator | Monday 09 March 2026 00:50:24 +0000 (0:00:00.877) 0:03:02.223 **********
2026-03-09 00:59:49.828945 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.828952 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.828958 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.828964 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.828970 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.828976 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.828983 | orchestrator |
2026-03-09 00:59:49.828989 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-09 00:59:49.828995 | orchestrator | Monday 09 March 2026 00:50:25 +0000 (0:00:01.254) 0:03:03.477 **********
2026-03-09 00:59:49.829001 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.829008 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.829014 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.829020 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.829026 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.829032 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.829038 | orchestrator |
2026-03-09 00:59:49.829044 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-09 00:59:49.829051 | orchestrator | Monday 09 March 2026 00:50:26 +0000 (0:00:00.625) 0:03:04.103 **********
2026-03-09 00:59:49.829057 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.829063 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.829069 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.829075 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.829081 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.829087 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.829101 | orchestrator |
2026-03-09 00:59:49.829108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-09 00:59:49.829114 | orchestrator | Monday 09 March 2026 00:50:27 +0000 (0:00:00.788) 0:03:04.891 **********
2026-03-09 00:59:49.829120 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:49.829159 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:49.829175 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:49.829181 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.829187 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.829193 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.829200 | orchestrator |
2026-03-09 00:59:49.829206 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-09 00:59:49.829212 | orchestrator | Monday 09 March 2026 00:50:27 +0000 (0:00:00.663) 0:03:05.555 **********
2026-03-09 00:59:49.829218 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:49.829224 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:49.829230 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:49.829237 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:49.829243 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:49.829249 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:49.829255 | orchestrator |
2026-03-09 00:59:49.829261 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-09 00:59:49.829278 | orchestrator | Monday 09 March 2026 00:50:29 +0000 (0:00:01.504) 0:03:07.059 **********
2026-03-09 00:59:49.829285 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:49.829291 | orchestrator |
2026-03-09 00:59:49.829297 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-09 00:59:49.829303 | orchestrator | Monday 09 March 2026 00:50:30 +0000 (0:00:01.666) 0:03:08.725 **********
2026-03-09 00:59:49.829314 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-09 00:59:49.829320 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-09 00:59:49.829326 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-09 00:59:49.829332 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-09 00:59:49.829339 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-09 00:59:49.829345 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-09 00:59:49.829351 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-09 00:59:49.829357 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-09 00:59:49.829363 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-09 00:59:49.829369 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-09 00:59:49.829376 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-09 00:59:49.829382 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-09 00:59:49.829388 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-09 00:59:49.829394 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-09 00:59:49.829400 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-09 00:59:49.829407 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-09 00:59:49.829413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-09 00:59:49.829419 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-09 00:59:49.829429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-09 00:59:49.829435 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-09 00:59:49.829441 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-09 00:59:49.829447 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-09 00:59:49.829454 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-09 00:59:49.829460 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-09 00:59:49.829466 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-09 00:59:49.829472 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-09 00:59:49.829478 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-09 00:59:49.829484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-09 00:59:49.829490 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-09 00:59:49.829496 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-09 00:59:49.829502 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-09 00:59:49.829508 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-09 00:59:49.829514 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-09 00:59:49.829520 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-09 00:59:49.829527 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-09 00:59:49.829533 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-09 00:59:49.829539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-09 00:59:49.829545 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-09 00:59:49.829551 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-09 00:59:49.829558 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-09 00:59:49.829564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-09 00:59:49.829570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-09 00:59:49.829576 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-09 00:59:49.829582 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-09 00:59:49.829592 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-09 00:59:49.829598 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-09 00:59:49.829604 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-09 00:59:49.829610 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-09 00:59:49.829617 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-09 00:59:49.829623 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-09 00:59:49.829629 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-09 00:59:49.829635 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-09 00:59:49.829641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-09 00:59:49.829648 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-09 00:59:49.829654 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 00:59:49.829660 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-09 00:59:49.829669 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 00:59:49.829675 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-09 00:59:49.829681 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-09 00:59:49.829687 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-09 00:59:49.829693 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 00:59:49.829700 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-09 00:59:49.829706 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-09 00:59:49.829740 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-09 00:59:49.829747 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 00:59:49.829766 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-09 00:59:49.829772 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-09 00:59:49.829779 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-09 00:59:49.829785 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-09 00:59:49.829791 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-09 00:59:49.829797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 00:59:49.829803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 00:59:49.829809 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-09 00:59:49.829815 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-09 00:59:49.829821 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-09 00:59:49.829827 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-09 00:59:49.829837 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-09 00:59:49.829844 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-09 00:59:49.829850 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-09 00:59:49.829856 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-09 00:59:49.829862 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-09 00:59:49.829869 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-09 00:59:49.829875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-09 00:59:49.829881 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-09 00:59:49.829891 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-09 00:59:49.829898 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-09 00:59:49.829904 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-09 00:59:49.829910 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-09 00:59:49.829916 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-09 00:59:49.829923 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-09 00:59:49.829929 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-09 00:59:49.829935 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-09 00:59:49.829941 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-09 00:59:49.829947 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-09 00:59:49.829953 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-09 00:59:49.829959 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-09 00:59:49.829966 | orchestrator |
2026-03-09 00:59:49.829972 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-09 00:59:49.829978 | orchestrator | Monday 09 March 2026 00:50:38 +0000 (0:00:07.745) 0:03:16.471 **********
2026-03-09 00:59:49.829984 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:49.829998 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:49.830004 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:49.830030 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:59:49.830038 | orchestrator |
2026-03-09 00:59:49.830045 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-09 00:59:49.830051 | orchestrator | Monday 09 March 2026 00:50:39 +0000 (0:00:01.264) 0:03:17.735 **********
2026-03-09 00:59:49.830057 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-09 00:59:49.830064 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-09 00:59:49.830070 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-09 00:59:49.830076 | orchestrator |
2026-03-09 00:59:49.830082 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-09 00:59:49.830089 | orchestrator | Monday 09 March 2026 00:50:41 +0000 (0:00:01.427) 0:03:19.163 **********
2026-03-09 00:59:49.830098 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-09 00:59:49.830104 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-09 00:59:49.830111 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-09 00:59:49.830117 | orchestrator |
2026-03-09 00:59:49.830123 | orchestrator | TASK [ceph-config : Reset num_osds]
******************************************** 2026-03-09 00:59:49.830129 | orchestrator | Monday 09 March 2026 00:50:43 +0000 (0:00:01.840) 0:03:21.003 ********** 2026-03-09 00:59:49.830136 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.830142 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.830149 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.830155 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830161 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830167 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830173 | orchestrator | 2026-03-09 00:59:49.830180 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-09 00:59:49.830229 | orchestrator | Monday 09 March 2026 00:50:44 +0000 (0:00:01.131) 0:03:22.135 ********** 2026-03-09 00:59:49.830236 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.830242 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.830248 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.830254 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830260 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830265 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830271 | orchestrator | 2026-03-09 00:59:49.830276 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-09 00:59:49.830282 | orchestrator | Monday 09 March 2026 00:50:45 +0000 (0:00:01.472) 0:03:23.607 ********** 2026-03-09 00:59:49.830287 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830292 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830298 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830303 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830316 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830321 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:59:49.830326 | orchestrator | 2026-03-09 00:59:49.830336 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-09 00:59:49.830342 | orchestrator | Monday 09 March 2026 00:50:46 +0000 (0:00:00.761) 0:03:24.369 ********** 2026-03-09 00:59:49.830347 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830353 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830358 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830364 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830369 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830374 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830380 | orchestrator | 2026-03-09 00:59:49.830385 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-09 00:59:49.830391 | orchestrator | Monday 09 March 2026 00:50:47 +0000 (0:00:01.235) 0:03:25.604 ********** 2026-03-09 00:59:49.830396 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830401 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830407 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830412 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830417 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830423 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830428 | orchestrator | 2026-03-09 00:59:49.830433 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-09 00:59:49.830439 | orchestrator | Monday 09 March 2026 00:50:48 +0000 (0:00:00.802) 0:03:26.407 ********** 2026-03-09 00:59:49.830444 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830449 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830455 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830460 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:59:49.830465 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830471 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830476 | orchestrator | 2026-03-09 00:59:49.830482 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-09 00:59:49.830487 | orchestrator | Monday 09 March 2026 00:50:49 +0000 (0:00:01.250) 0:03:27.657 ********** 2026-03-09 00:59:49.830492 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830498 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830503 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830509 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830514 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830519 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830525 | orchestrator | 2026-03-09 00:59:49.830530 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-09 00:59:49.830536 | orchestrator | Monday 09 March 2026 00:50:50 +0000 (0:00:00.677) 0:03:28.334 ********** 2026-03-09 00:59:49.830541 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830550 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830556 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830561 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830567 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830572 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830577 | orchestrator | 2026-03-09 00:59:49.830583 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-09 00:59:49.830588 | orchestrator | Monday 09 March 2026 00:50:51 +0000 (0:00:01.147) 0:03:29.481 ********** 2026-03-09 00:59:49.830593 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:59:49.830599 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830604 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830610 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.830615 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.830620 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.830626 | orchestrator | 2026-03-09 00:59:49.830631 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-09 00:59:49.830637 | orchestrator | Monday 09 March 2026 00:50:55 +0000 (0:00:03.637) 0:03:33.119 ********** 2026-03-09 00:59:49.830645 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.830650 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.830656 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.830661 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830667 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830672 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830677 | orchestrator | 2026-03-09 00:59:49.830683 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-09 00:59:49.830688 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:01.181) 0:03:34.300 ********** 2026-03-09 00:59:49.830694 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.830699 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.830704 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.830717 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830723 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830729 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830734 | orchestrator | 2026-03-09 00:59:49.830740 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-09 00:59:49.830745 | orchestrator | Monday 09 March 2026 00:50:57 +0000 
(0:00:01.083) 0:03:35.383 ********** 2026-03-09 00:59:49.830750 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830756 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830761 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830767 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830772 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830777 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830783 | orchestrator | 2026-03-09 00:59:49.830788 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-09 00:59:49.830793 | orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:01.396) 0:03:36.780 ********** 2026-03-09 00:59:49.830799 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.830804 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.830810 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.830815 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830824 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830830 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830835 | orchestrator | 2026-03-09 00:59:49.830841 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-09 00:59:49.830846 | orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:00.870) 0:03:37.651 ********** 2026-03-09 00:59:49.830864 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-09 00:59:49.830872 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-09 00:59:49.830879 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830884 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-09 00:59:49.830890 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-09 00:59:49.830896 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830902 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-09 00:59:49.830907 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-03-09 00:59:49.830913 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830918 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830924 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830929 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830934 | orchestrator | 2026-03-09 00:59:49.830940 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-09 00:59:49.830945 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:02.039) 0:03:39.690 ********** 2026-03-09 00:59:49.830953 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.830959 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.830964 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.830970 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.830975 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.830980 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.830986 | orchestrator | 2026-03-09 00:59:49.830991 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-09 00:59:49.830997 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:00.885) 0:03:40.575 ********** 2026-03-09 00:59:49.831002 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831007 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831013 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831018 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831023 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831029 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831034 | orchestrator | 2026-03-09 00:59:49.831040 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 00:59:49.831045 | orchestrator | Monday 09 March 2026 
00:51:03 +0000 (0:00:00.781) 0:03:41.357 ********** 2026-03-09 00:59:49.831055 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831061 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831066 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831071 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831077 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831082 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831087 | orchestrator | 2026-03-09 00:59:49.831093 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 00:59:49.831098 | orchestrator | Monday 09 March 2026 00:51:04 +0000 (0:00:00.659) 0:03:42.017 ********** 2026-03-09 00:59:49.831103 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831109 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831114 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831120 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831125 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831130 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831136 | orchestrator | 2026-03-09 00:59:49.831141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 00:59:49.831150 | orchestrator | Monday 09 March 2026 00:51:05 +0000 (0:00:00.926) 0:03:42.943 ********** 2026-03-09 00:59:49.831155 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831161 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831166 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831171 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831177 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831182 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831187 | orchestrator | 2026-03-09 00:59:49.831193 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-09 00:59:49.831198 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:00.966) 0:03:43.910 ********** 2026-03-09 00:59:49.831204 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.831209 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.831215 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831220 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.831225 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831231 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831236 | orchestrator | 2026-03-09 00:59:49.831242 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-09 00:59:49.831247 | orchestrator | Monday 09 March 2026 00:51:07 +0000 (0:00:00.898) 0:03:44.809 ********** 2026-03-09 00:59:49.831252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.831258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.831264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.831270 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831275 | orchestrator | 2026-03-09 00:59:49.831281 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-09 00:59:49.831286 | orchestrator | Monday 09 March 2026 00:51:07 +0000 (0:00:00.640) 0:03:45.449 ********** 2026-03-09 00:59:49.831291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.831297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.831302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.831307 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831313 | orchestrator | 2026-03-09 00:59:49.831318 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-09 00:59:49.831324 | orchestrator | Monday 09 March 2026 00:51:08 +0000 (0:00:00.417) 0:03:45.867 ********** 2026-03-09 00:59:49.831329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.831335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.831340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.831353 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831358 | orchestrator | 2026-03-09 00:59:49.831364 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-09 00:59:49.831369 | orchestrator | Monday 09 March 2026 00:51:08 +0000 (0:00:00.565) 0:03:46.433 ********** 2026-03-09 00:59:49.831375 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.831380 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.831385 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.831391 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831396 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831401 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831407 | orchestrator | 2026-03-09 00:59:49.831419 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-09 00:59:49.831425 | orchestrator | Monday 09 March 2026 00:51:09 +0000 (0:00:00.730) 0:03:47.163 ********** 2026-03-09 00:59:49.831430 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 00:59:49.831436 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-09 00:59:49.831441 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-09 00:59:49.831449 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-09 00:59:49.831455 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831460 | orchestrator | skipping: [testbed-node-0] => 
(item=0)  2026-03-09 00:59:49.831465 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831471 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-09 00:59:49.831476 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831482 | orchestrator | 2026-03-09 00:59:49.831487 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-09 00:59:49.831492 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:02.522) 0:03:49.686 ********** 2026-03-09 00:59:49.831498 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.831503 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.831509 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.831514 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.831519 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.831531 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.831536 | orchestrator | 2026-03-09 00:59:49.831542 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 00:59:49.831548 | orchestrator | Monday 09 March 2026 00:51:16 +0000 (0:00:04.089) 0:03:53.776 ********** 2026-03-09 00:59:49.831553 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.831558 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.831564 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.831569 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.831574 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.831579 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.831585 | orchestrator | 2026-03-09 00:59:49.831590 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-09 00:59:49.831596 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:01.595) 0:03:55.371 ********** 2026-03-09 00:59:49.831607 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:59:49.831613 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831618 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.831629 | orchestrator | 2026-03-09 00:59:49.831635 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-09 00:59:49.831643 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:01.252) 0:03:56.624 ********** 2026-03-09 00:59:49.831649 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.831654 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.831660 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.831665 | orchestrator | 2026-03-09 00:59:49.831671 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-09 00:59:49.831676 | orchestrator | Monday 09 March 2026 00:51:19 +0000 (0:00:00.461) 0:03:57.086 ********** 2026-03-09 00:59:49.831685 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.831691 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.831696 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.831702 | orchestrator | 2026-03-09 00:59:49.831707 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-09 00:59:49.831721 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:01.959) 0:03:59.045 ********** 2026-03-09 00:59:49.831727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:59:49.831732 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:59:49.831738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:59:49.831743 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831749 | orchestrator | 2026-03-09 
00:59:49.831754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-09 00:59:49.831760 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:00.702) 0:03:59.747 ********** 2026-03-09 00:59:49.831765 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.831770 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.831776 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.831781 | orchestrator | 2026-03-09 00:59:49.831786 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-09 00:59:49.831792 | orchestrator | Monday 09 March 2026 00:51:22 +0000 (0:00:00.443) 0:04:00.191 ********** 2026-03-09 00:59:49.831797 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.831803 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.831808 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.831814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-03-09 00:59:49.831819 | orchestrator | 2026-03-09 00:59:49.831825 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-09 00:59:49.831830 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:01.087) 0:04:01.278 ********** 2026-03-09 00:59:49.831835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.831841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.831846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.831852 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831857 | orchestrator | 2026-03-09 00:59:49.831863 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-09 00:59:49.831868 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:00.484) 
0:04:01.762 ********** 2026-03-09 00:59:49.831874 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831879 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831884 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831890 | orchestrator | 2026-03-09 00:59:49.831895 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-09 00:59:49.831901 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:00.445) 0:04:02.208 ********** 2026-03-09 00:59:49.831906 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831911 | orchestrator | 2026-03-09 00:59:49.831917 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-09 00:59:49.831922 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:00.264) 0:04:02.473 ********** 2026-03-09 00:59:49.831928 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831933 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.831941 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.831947 | orchestrator | 2026-03-09 00:59:49.831952 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-09 00:59:49.831958 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.356) 0:04:02.830 ********** 2026-03-09 00:59:49.831963 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831969 | orchestrator | 2026-03-09 00:59:49.831978 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-09 00:59:49.831983 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.231) 0:04:03.061 ********** 2026-03-09 00:59:49.831989 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.831994 | orchestrator | 2026-03-09 00:59:49.832000 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-09 
00:59:49.832005 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.251) 0:04:03.312 ********** 2026-03-09 00:59:49.832010 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832016 | orchestrator | 2026-03-09 00:59:49.832021 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-09 00:59:49.832026 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.173) 0:04:03.486 ********** 2026-03-09 00:59:49.832032 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832037 | orchestrator | 2026-03-09 00:59:49.832043 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-09 00:59:49.832048 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:00.837) 0:04:04.324 ********** 2026-03-09 00:59:49.832053 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832059 | orchestrator | 2026-03-09 00:59:49.832064 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-09 00:59:49.832069 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:00.250) 0:04:04.575 ********** 2026-03-09 00:59:49.832075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.832080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.832086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.832091 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832097 | orchestrator | 2026-03-09 00:59:49.832102 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-09 00:59:49.832119 | orchestrator | Monday 09 March 2026 00:51:27 +0000 (0:00:00.531) 0:04:05.106 ********** 2026-03-09 00:59:49.832124 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832130 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.832135 | 
orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.832141 | orchestrator | 2026-03-09 00:59:49.832146 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-09 00:59:49.832152 | orchestrator | Monday 09 March 2026 00:51:27 +0000 (0:00:00.424) 0:04:05.530 ********** 2026-03-09 00:59:49.832157 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832163 | orchestrator | 2026-03-09 00:59:49.832168 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-09 00:59:49.832173 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:00.250) 0:04:05.781 ********** 2026-03-09 00:59:49.832179 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832184 | orchestrator | 2026-03-09 00:59:49.832190 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-09 00:59:49.832195 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:00.237) 0:04:06.019 ********** 2026-03-09 00:59:49.832200 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832206 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.832211 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.832217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.832223 | orchestrator | 2026-03-09 00:59:49.832228 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-09 00:59:49.832233 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:01.286) 0:04:07.305 ********** 2026-03-09 00:59:49.832239 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.832244 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.832250 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.832255 | orchestrator | 2026-03-09 00:59:49.832261 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-03-09 00:59:49.832266 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:00.409) 0:04:07.715 ********** 2026-03-09 00:59:49.832274 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.832279 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.832285 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.832290 | orchestrator | 2026-03-09 00:59:49.832295 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-09 00:59:49.832301 | orchestrator | Monday 09 March 2026 00:51:31 +0000 (0:00:01.275) 0:04:08.991 ********** 2026-03-09 00:59:49.832306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.832312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.832317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.832323 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832328 | orchestrator | 2026-03-09 00:59:49.832334 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-09 00:59:49.832339 | orchestrator | Monday 09 March 2026 00:51:32 +0000 (0:00:00.918) 0:04:09.910 ********** 2026-03-09 00:59:49.832344 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.832350 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.832355 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.832361 | orchestrator | 2026-03-09 00:59:49.832366 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-09 00:59:49.832372 | orchestrator | Monday 09 March 2026 00:51:32 +0000 (0:00:00.576) 0:04:10.486 ********** 2026-03-09 00:59:49.832377 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832382 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.832388 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 00:59:49.832394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.832399 | orchestrator | 2026-03-09 00:59:49.832407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-09 00:59:49.832413 | orchestrator | Monday 09 March 2026 00:51:33 +0000 (0:00:00.871) 0:04:11.358 ********** 2026-03-09 00:59:49.832418 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.832424 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.832429 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.832435 | orchestrator | 2026-03-09 00:59:49.832447 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-09 00:59:49.832453 | orchestrator | Monday 09 March 2026 00:51:34 +0000 (0:00:00.617) 0:04:11.975 ********** 2026-03-09 00:59:49.832458 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.832463 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.832469 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.832474 | orchestrator | 2026-03-09 00:59:49.832479 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-09 00:59:49.832485 | orchestrator | Monday 09 March 2026 00:51:35 +0000 (0:00:01.201) 0:04:13.177 ********** 2026-03-09 00:59:49.832490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.832496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.832501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.832506 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832512 | orchestrator | 2026-03-09 00:59:49.832517 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-09 
00:59:49.832523 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.670) 0:04:13.848 ********** 2026-03-09 00:59:49.832528 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.832534 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.832539 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.832545 | orchestrator | 2026-03-09 00:59:49.832550 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-09 00:59:49.832555 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.454) 0:04:14.302 ********** 2026-03-09 00:59:49.832565 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832570 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.832576 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.832581 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832587 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.832595 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.832601 | orchestrator | 2026-03-09 00:59:49.832607 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-09 00:59:49.832612 | orchestrator | Monday 09 March 2026 00:51:37 +0000 (0:00:01.034) 0:04:15.337 ********** 2026-03-09 00:59:49.832617 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.832623 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.832628 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.832634 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.832639 | orchestrator | 2026-03-09 00:59:49.832645 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-09 00:59:49.832650 | orchestrator | Monday 09 March 2026 00:51:38 +0000 (0:00:00.965) 0:04:16.303 ********** 2026-03-09 00:59:49.832656 | orchestrator | 
ok: [testbed-node-0] 2026-03-09 00:59:49.832661 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.832666 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.832672 | orchestrator | 2026-03-09 00:59:49.832677 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-09 00:59:49.832683 | orchestrator | Monday 09 March 2026 00:51:39 +0000 (0:00:00.570) 0:04:16.873 ********** 2026-03-09 00:59:49.832688 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.832693 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.832699 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.832704 | orchestrator | 2026-03-09 00:59:49.832709 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-09 00:59:49.832723 | orchestrator | Monday 09 March 2026 00:51:40 +0000 (0:00:01.387) 0:04:18.261 ********** 2026-03-09 00:59:49.832729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:59:49.832734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:59:49.832740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:59:49.832745 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832751 | orchestrator | 2026-03-09 00:59:49.832756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-09 00:59:49.832761 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.899) 0:04:19.161 ********** 2026-03-09 00:59:49.832767 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.832772 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.832778 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.832783 | orchestrator | 2026-03-09 00:59:49.832788 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-09 00:59:49.832794 | orchestrator | 2026-03-09 
00:59:49.832799 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 00:59:49.832805 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.735) 0:04:19.896 ********** 2026-03-09 00:59:49.832810 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.832816 | orchestrator | 2026-03-09 00:59:49.832821 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-09 00:59:49.832826 | orchestrator | Monday 09 March 2026 00:51:43 +0000 (0:00:00.900) 0:04:20.797 ********** 2026-03-09 00:59:49.832832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.832837 | orchestrator | 2026-03-09 00:59:49.832843 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 00:59:49.832848 | orchestrator | Monday 09 March 2026 00:51:43 +0000 (0:00:00.653) 0:04:21.450 ********** 2026-03-09 00:59:49.832858 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.832863 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.832868 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.832874 | orchestrator | 2026-03-09 00:59:49.832883 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 00:59:49.832888 | orchestrator | Monday 09 March 2026 00:51:44 +0000 (0:00:01.120) 0:04:22.571 ********** 2026-03-09 00:59:49.832893 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832899 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.832904 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.832910 | orchestrator | 2026-03-09 00:59:49.832915 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-09 00:59:49.832921 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:00.375) 0:04:22.947 ********** 2026-03-09 00:59:49.832926 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832931 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.832937 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.832942 | orchestrator | 2026-03-09 00:59:49.832948 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 00:59:49.832953 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:00.390) 0:04:23.337 ********** 2026-03-09 00:59:49.832958 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.832964 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.832969 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.832974 | orchestrator | 2026-03-09 00:59:49.832980 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 00:59:49.832985 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:00.344) 0:04:23.681 ********** 2026-03-09 00:59:49.832991 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.832996 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833001 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833007 | orchestrator | 2026-03-09 00:59:49.833012 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-09 00:59:49.833018 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:01.198) 0:04:24.880 ********** 2026-03-09 00:59:49.833023 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833028 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833034 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833039 | orchestrator | 2026-03-09 00:59:49.833045 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 
00:59:49.833050 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:00.366) 0:04:25.246 ********** 2026-03-09 00:59:49.833059 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833064 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833077 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833082 | orchestrator | 2026-03-09 00:59:49.833088 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:59:49.833093 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:00.329) 0:04:25.576 ********** 2026-03-09 00:59:49.833099 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833104 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833109 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833115 | orchestrator | 2026-03-09 00:59:49.833120 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 00:59:49.833126 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:00.837) 0:04:26.414 ********** 2026-03-09 00:59:49.833131 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833137 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833142 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833147 | orchestrator | 2026-03-09 00:59:49.833153 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:59:49.833158 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:01.163) 0:04:27.578 ********** 2026-03-09 00:59:49.833164 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833169 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833178 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833183 | orchestrator | 2026-03-09 00:59:49.833188 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:59:49.833194 | orchestrator | Monday 
09 March 2026 00:51:50 +0000 (0:00:00.327) 0:04:27.905 ********** 2026-03-09 00:59:49.833199 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833205 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833210 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833215 | orchestrator | 2026-03-09 00:59:49.833228 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 00:59:49.833233 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:00.414) 0:04:28.320 ********** 2026-03-09 00:59:49.833238 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833244 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833256 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833261 | orchestrator | 2026-03-09 00:59:49.833267 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 00:59:49.833272 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:00.318) 0:04:28.638 ********** 2026-03-09 00:59:49.833277 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833283 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833288 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833293 | orchestrator | 2026-03-09 00:59:49.833299 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 00:59:49.833304 | orchestrator | Monday 09 March 2026 00:51:51 +0000 (0:00:00.313) 0:04:28.952 ********** 2026-03-09 00:59:49.833310 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833315 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833321 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833326 | orchestrator | 2026-03-09 00:59:49.833331 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 00:59:49.833337 | orchestrator | Monday 09 March 2026 
00:51:51 +0000 (0:00:00.736) 0:04:29.689 ********** 2026-03-09 00:59:49.833342 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833347 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833353 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833358 | orchestrator | 2026-03-09 00:59:49.833363 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 00:59:49.833369 | orchestrator | Monday 09 March 2026 00:51:52 +0000 (0:00:00.395) 0:04:30.084 ********** 2026-03-09 00:59:49.833374 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833380 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.833385 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.833390 | orchestrator | 2026-03-09 00:59:49.833398 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:59:49.833404 | orchestrator | Monday 09 March 2026 00:51:52 +0000 (0:00:00.386) 0:04:30.470 ********** 2026-03-09 00:59:49.833409 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833415 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833420 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833425 | orchestrator | 2026-03-09 00:59:49.833431 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 00:59:49.833436 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:00.412) 0:04:30.883 ********** 2026-03-09 00:59:49.833442 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833447 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833452 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833458 | orchestrator | 2026-03-09 00:59:49.833463 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 00:59:49.833469 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:00.708) 
0:04:31.592 ********** 2026-03-09 00:59:49.833474 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833479 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833485 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833493 | orchestrator | 2026-03-09 00:59:49.833499 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-09 00:59:49.833504 | orchestrator | Monday 09 March 2026 00:51:54 +0000 (0:00:00.637) 0:04:32.229 ********** 2026-03-09 00:59:49.833510 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833515 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833520 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833526 | orchestrator | 2026-03-09 00:59:49.833531 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-09 00:59:49.833536 | orchestrator | Monday 09 March 2026 00:51:54 +0000 (0:00:00.455) 0:04:32.685 ********** 2026-03-09 00:59:49.833542 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:49.833547 | orchestrator | 2026-03-09 00:59:49.833553 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-09 00:59:49.833558 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:00.946) 0:04:33.631 ********** 2026-03-09 00:59:49.833563 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.833569 | orchestrator | 2026-03-09 00:59:49.833577 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-09 00:59:49.833582 | orchestrator | Monday 09 March 2026 00:51:56 +0000 (0:00:00.160) 0:04:33.791 ********** 2026-03-09 00:59:49.833588 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:59:49.833593 | orchestrator | 2026-03-09 00:59:49.833599 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-09 00:59:49.833604 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:01.170) 0:04:34.961 ********** 2026-03-09 00:59:49.833610 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833615 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833621 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833626 | orchestrator | 2026-03-09 00:59:49.833631 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-09 00:59:49.833637 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:00.465) 0:04:35.427 ********** 2026-03-09 00:59:49.833642 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833647 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833653 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833658 | orchestrator | 2026-03-09 00:59:49.833664 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-09 00:59:49.833669 | orchestrator | Monday 09 March 2026 00:51:58 +0000 (0:00:00.442) 0:04:35.869 ********** 2026-03-09 00:59:49.833674 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.833680 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.833685 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.833691 | orchestrator | 2026-03-09 00:59:49.833696 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-09 00:59:49.833702 | orchestrator | Monday 09 March 2026 00:52:00 +0000 (0:00:01.904) 0:04:37.774 ********** 2026-03-09 00:59:49.833707 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.833752 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.833758 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.833763 | orchestrator | 2026-03-09 00:59:49.833769 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-03-09 00:59:49.833774 | orchestrator | Monday 09 March 2026 00:52:00 +0000 (0:00:00.917) 0:04:38.692 ********** 2026-03-09 00:59:49.833780 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.833785 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.833790 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.833796 | orchestrator | 2026-03-09 00:59:49.833801 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-09 00:59:49.833806 | orchestrator | Monday 09 March 2026 00:52:01 +0000 (0:00:00.822) 0:04:39.515 ********** 2026-03-09 00:59:49.833812 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.833817 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833823 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.833832 | orchestrator | 2026-03-09 00:59:49.833837 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-09 00:59:49.833843 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:00.996) 0:04:40.512 ********** 2026-03-09 00:59:49.833848 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.833854 | orchestrator | 2026-03-09 00:59:49.833859 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-09 00:59:49.833864 | orchestrator | Monday 09 March 2026 00:52:04 +0000 (0:00:01.988) 0:04:42.500 ********** 2026-03-09 00:59:49.833870 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.833875 | orchestrator | 2026-03-09 00:59:49.833881 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-09 00:59:49.833886 | orchestrator | Monday 09 March 2026 00:52:05 +0000 (0:00:00.756) 0:04:43.256 ********** 2026-03-09 00:59:49.833891 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 00:59:49.833897 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.833902 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.833911 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:49.833916 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-09 00:59:49.833922 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:49.833927 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:49.833932 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:49.833938 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-09 00:59:49.833943 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-09 00:59:49.833949 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-09 00:59:49.833954 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-09 00:59:49.833959 | orchestrator | 2026-03-09 00:59:49.833965 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-09 00:59:49.833970 | orchestrator | Monday 09 March 2026 00:52:09 +0000 (0:00:03.524) 0:04:46.781 ********** 2026-03-09 00:59:49.833975 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.833981 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.833986 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.833991 | orchestrator | 2026-03-09 00:59:49.833997 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-09 00:59:49.834002 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:01.472) 0:04:48.253 ********** 2026-03-09 00:59:49.834008 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.834093 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.834102 | orchestrator | ok: [testbed-node-2] 
2026-03-09 00:59:49.834115 | orchestrator | 2026-03-09 00:59:49.834121 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-09 00:59:49.834127 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:00.322) 0:04:48.576 ********** 2026-03-09 00:59:49.834132 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:49.834137 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:49.834143 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:49.834148 | orchestrator | 2026-03-09 00:59:49.834153 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-09 00:59:49.834158 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:00.503) 0:04:49.079 ********** 2026-03-09 00:59:49.834164 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.834189 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.834196 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.834202 | orchestrator | 2026-03-09 00:59:49.834207 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-09 00:59:49.834213 | orchestrator | Monday 09 March 2026 00:52:13 +0000 (0:00:01.905) 0:04:50.985 ********** 2026-03-09 00:59:49.834219 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:49.834231 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:49.834237 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:49.834242 | orchestrator | 2026-03-09 00:59:49.834248 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-09 00:59:49.834253 | orchestrator | Monday 09 March 2026 00:52:14 +0000 (0:00:01.448) 0:04:52.434 ********** 2026-03-09 00:59:49.834259 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:49.834264 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:49.834269 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:49.834275 
| orchestrator | 2026-03-09 00:59:49.834280 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
Monday 09 March 2026 00:52:14 +0000 (0:00:00.279) 0:04:52.713 **********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Monday 09 March 2026 00:52:15 +0000 (0:00:00.683) 0:04:53.397 **********
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Monday 09 March 2026 00:52:15 +0000 (0:00:00.364) 0:04:53.761 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Monday 09 March 2026 00:52:16 +0000 (0:00:00.384) 0:04:54.145 **********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Monday 09 March 2026 00:52:17 +0000 (0:00:00.793) 0:04:54.938 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Monday 09 March 2026 00:52:20 +0000 (0:00:03.462) 0:04:58.400 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Monday 09 March 2026 00:52:22 +0000 (0:00:01.534) 0:04:59.935 **********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [ceph-mon : Start the monitor service] ************************************
Monday 09 March 2026 00:52:24 +0000 (0:00:02.177) 0:05:02.112 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Monday 09 March 2026 00:52:26 +0000 (0:00:02.265) 0:05:04.378 **********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1, testbed-node-0, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Monday 09 March 2026 00:52:27 +0000 (0:00:00.638) 0:05:05.016 **********
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Monday 09 March 2026 00:52:49 +0000 (0:00:21.911) 0:05:26.927 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Monday 09 March 2026 00:52:59 +0000 (0:00:09.851) 0:05:36.778 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Monday 09 March 2026 00:52:59 +0000 (0:00:00.609) 0:05:37.388 **********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6178ee46f61f373321abbfa8fb721bd2d38766c4'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 09 March 2026 00:53:16 +0000 (0:00:16.501) 0:05:53.890 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Monday 09 March 2026 00:53:16 +0000 (0:00:00.471) 0:05:54.362 **********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Monday 09 March 2026 00:53:17 +0000 (0:00:00.872) 0:05:55.234 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Monday 09 March 2026 00:53:17 +0000 (0:00:00.351) 0:05:55.585 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Monday 09 March 2026 00:53:18 +0000 (0:00:00.387) 0:05:55.972 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Monday 09 March 2026 00:53:19 +0000 (0:00:00.978) 0:05:56.951 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:53:20 +0000 (0:00:00.925) 0:05:57.877 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:53:20 +0000 (0:00:00.635) 0:05:58.512 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:53:21 +0000 (0:00:00.948) 0:05:59.461 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:53:22 +0000 (0:00:00.850) 0:06:00.312 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:53:22 +0000 (0:00:00.379) 0:06:00.691 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:53:23 +0000 (0:00:00.677) 0:06:01.369 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:53:23 +0000 (0:00:00.353) 0:06:01.723 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:53:24 +0000 (0:00:00.738) 0:06:02.461 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:53:24 +0000 (0:00:00.289) 0:06:02.750 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:53:25 +0000 (0:00:00.563) 0:06:03.314 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:53:26 +0000 (0:00:00.920) 0:06:04.234 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:53:27 +0000 (0:00:00.794) 0:06:05.029 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:53:27 +0000 (0:00:00.297) 0:06:05.326 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:53:28 +0000 (0:00:00.496) 0:06:05.823 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:53:28 +0000 (0:00:00.299) 0:06:06.123 **********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:53:28 +0000 (0:00:00.345) 0:06:06.469 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:53:29 +0000 (0:00:00.328) 0:06:06.798 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:53:29 +0000 (0:00:00.323) 0:06:07.122 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:53:29 +0000 (0:00:00.523) 0:06:07.646 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:53:30 +0000 (0:00:00.426) 0:06:08.072 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:53:30 +0000 (0:00:00.375) 0:06:08.448 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Monday 09 March 2026 00:53:31 +0000 (0:00:00.823) 0:06:09.271 **********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Monday 09 March 2026 00:53:32 +0000 (0:00:00.688) 0:06:09.960 **********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Monday 09 March 2026 00:53:32 +0000 (0:00:00.571) 0:06:10.531 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Monday 09 March 2026 00:53:33 +0000 (0:00:00.772) 0:06:11.304 **********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Monday 09 March 2026 00:53:34 +0000 (0:00:00.742) 0:06:12.046 **********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Monday 09 March 2026 00:53:45 +0000 (0:00:10.992) 0:06:23.038 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Monday 09 March 2026 00:53:45 +0000 (0:00:00.621) 0:06:23.659 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Monday 09 March 2026 00:53:48 +0000 (0:00:02.983) 0:06:26.643 **********
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Monday 09 March 2026 00:53:50 +0000 (0:00:01.872) 0:06:28.516 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Monday 09 March 2026 00:53:51 +0000 (0:00:01.173) 0:06:29.690 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Monday 09 March 2026 00:53:52 +0000 (0:00:00.439) 0:06:30.129 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Monday 09 March 2026 00:53:52 +0000 (0:00:00.459) 0:06:30.589 **********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Monday 09 March 2026 00:53:53 +0000 (0:00:00.860) 0:06:31.449 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Monday 09 March 2026 00:53:54 +0000 (0:00:00.384) 0:06:31.834 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Monday 09 March 2026 00:53:54 +0000 (0:00:00.376) 0:06:32.211 **********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Monday 09 March 2026 00:53:55 +0000 (0:00:00.852) 0:06:33.063 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Monday 09 March 2026 00:53:56 +0000 (0:00:01.576) 0:06:34.639 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Monday 09 March 2026 00:53:58 +0000 (0:00:01.348) 0:06:35.988 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Monday 09 March 2026 00:54:00 +0000 (0:00:02.005) 0:06:37.994 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Monday 09 March 2026 00:54:03 +0000 (0:00:03.311) 0:06:41.305 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Monday 09 March 2026 00:54:03 +0000 (0:00:00.380) 0:06:41.686 **********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Monday 09 March 2026 00:54:40 +0000 (0:00:36.354) 0:07:18.040 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Monday 09 March 2026 00:54:41 +0000 (0:00:01.425) 0:07:19.466 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Monday 09 March 2026 00:54:42 +0000 (0:00:00.343) 0:07:19.809 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Monday 09 March 2026 00:54:42 +0000 (0:00:00.165) 0:07:19.974 **********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Monday 09 March 2026 00:54:49 +0000 (0:00:07.175) 0:07:27.150 **********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 09 March 2026 00:54:54 +0000 (0:00:05.457) 0:07:32.608 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 09 March 2026 00:54:55 +0000 (0:00:00.764) 0:07:33.372 **********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 09 March 2026 00:54:56 +0000 (0:00:00.907) 0:07:34.280 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 09 March 2026 00:54:57 +0000 (0:00:00.534) 0:07:34.816 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 09 March 2026 00:54:58 +0000 (0:00:01.378) 0:07:36.195 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 09 March 2026 00:54:59 +0000 (0:00:00.936) 0:07:37.131 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:55:00 +0000 (0:00:00.861) 0:07:37.992 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:55:00 +0000 (0:00:00.559) 0:07:38.552 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:55:01 +0000 (0:00:00.810) 0:07:39.362 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:55:01 +0000 (0:00:00.347) 0:07:39.710 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09
00:59:49.836674 | orchestrator | Monday 09 March 2026 00:55:02 +0000 (0:00:00.823) 0:07:40.533 ********** 2026-03-09 00:59:49.836679 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.836683 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.836688 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.836693 | orchestrator | 2026-03-09 00:59:49.836698 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 00:59:49.836703 | orchestrator | Monday 09 March 2026 00:55:03 +0000 (0:00:00.777) 0:07:41.310 ********** 2026-03-09 00:59:49.836707 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.836719 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.836724 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.836728 | orchestrator | 2026-03-09 00:59:49.836733 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 00:59:49.836738 | orchestrator | Monday 09 March 2026 00:55:04 +0000 (0:00:01.086) 0:07:42.396 ********** 2026-03-09 00:59:49.836743 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.836748 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.836753 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.836757 | orchestrator | 2026-03-09 00:59:49.836762 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-09 00:59:49.836767 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:00.459) 0:07:42.856 ********** 2026-03-09 00:59:49.836772 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.836776 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.836781 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.836786 | orchestrator | 2026-03-09 00:59:49.836791 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 00:59:49.836795 | orchestrator | Monday 
09 March 2026 00:55:05 +0000 (0:00:00.478) 0:07:43.335 ********** 2026-03-09 00:59:49.836800 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.836805 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.836810 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.836815 | orchestrator | 2026-03-09 00:59:49.836819 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:59:49.836826 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:00.369) 0:07:43.704 ********** 2026-03-09 00:59:49.836835 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.836839 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.836844 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.836849 | orchestrator | 2026-03-09 00:59:49.836854 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 00:59:49.836858 | orchestrator | Monday 09 March 2026 00:55:07 +0000 (0:00:01.606) 0:07:45.311 ********** 2026-03-09 00:59:49.836863 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.836868 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.836873 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.836877 | orchestrator | 2026-03-09 00:59:49.836882 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:59:49.836887 | orchestrator | Monday 09 March 2026 00:55:08 +0000 (0:00:01.076) 0:07:46.387 ********** 2026-03-09 00:59:49.836892 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.836896 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.836901 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.836906 | orchestrator | 2026-03-09 00:59:49.836911 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:59:49.836916 | orchestrator | Monday 09 March 2026 00:55:08 +0000 
(0:00:00.342) 0:07:46.729 ********** 2026-03-09 00:59:49.836920 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.836925 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.836930 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.836935 | orchestrator | 2026-03-09 00:59:49.836939 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 00:59:49.836944 | orchestrator | Monday 09 March 2026 00:55:09 +0000 (0:00:00.347) 0:07:47.077 ********** 2026-03-09 00:59:49.836949 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.836954 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.836959 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.836963 | orchestrator | 2026-03-09 00:59:49.836968 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 00:59:49.836973 | orchestrator | Monday 09 March 2026 00:55:10 +0000 (0:00:00.732) 0:07:47.809 ********** 2026-03-09 00:59:49.836977 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.836989 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.836993 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.836998 | orchestrator | 2026-03-09 00:59:49.837003 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 00:59:49.837010 | orchestrator | Monday 09 March 2026 00:55:10 +0000 (0:00:00.498) 0:07:48.308 ********** 2026-03-09 00:59:49.837015 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837020 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837025 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837029 | orchestrator | 2026-03-09 00:59:49.837034 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 00:59:49.837039 | orchestrator | Monday 09 March 2026 00:55:10 +0000 (0:00:00.386) 0:07:48.695 ********** 2026-03-09 
00:59:49.837044 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.837049 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837053 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837058 | orchestrator | 2026-03-09 00:59:49.837063 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 00:59:49.837068 | orchestrator | Monday 09 March 2026 00:55:11 +0000 (0:00:00.356) 0:07:49.051 ********** 2026-03-09 00:59:49.837072 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.837087 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837092 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837096 | orchestrator | 2026-03-09 00:59:49.837101 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:59:49.837106 | orchestrator | Monday 09 March 2026 00:55:11 +0000 (0:00:00.593) 0:07:49.645 ********** 2026-03-09 00:59:49.837111 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.837115 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837124 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837129 | orchestrator | 2026-03-09 00:59:49.837134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 00:59:49.837139 | orchestrator | Monday 09 March 2026 00:55:12 +0000 (0:00:00.383) 0:07:50.028 ********** 2026-03-09 00:59:49.837149 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837154 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837158 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837163 | orchestrator | 2026-03-09 00:59:49.837168 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 00:59:49.837173 | orchestrator | Monday 09 March 2026 00:55:12 +0000 (0:00:00.426) 0:07:50.454 ********** 2026-03-09 00:59:49.837178 | 
orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837182 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837187 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837192 | orchestrator | 2026-03-09 00:59:49.837196 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-09 00:59:49.837201 | orchestrator | Monday 09 March 2026 00:55:13 +0000 (0:00:01.073) 0:07:51.528 ********** 2026-03-09 00:59:49.837206 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837211 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837215 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837220 | orchestrator | 2026-03-09 00:59:49.837225 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-09 00:59:49.837230 | orchestrator | Monday 09 March 2026 00:55:14 +0000 (0:00:00.442) 0:07:51.971 ********** 2026-03-09 00:59:49.837235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 00:59:49.837239 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 00:59:49.837244 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 00:59:49.837249 | orchestrator | 2026-03-09 00:59:49.837254 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-09 00:59:49.837258 | orchestrator | Monday 09 March 2026 00:55:15 +0000 (0:00:00.827) 0:07:52.799 ********** 2026-03-09 00:59:49.837263 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.837268 | orchestrator | 2026-03-09 00:59:49.837273 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-09 00:59:49.837280 | orchestrator | Monday 09 March 2026 00:55:15 +0000 
(0:00:00.623) 0:07:53.423 ********** 2026-03-09 00:59:49.837285 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.837289 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837294 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837299 | orchestrator | 2026-03-09 00:59:49.837304 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-09 00:59:49.837308 | orchestrator | Monday 09 March 2026 00:55:16 +0000 (0:00:00.693) 0:07:54.116 ********** 2026-03-09 00:59:49.837313 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.837318 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837323 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837327 | orchestrator | 2026-03-09 00:59:49.837332 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-09 00:59:49.837337 | orchestrator | Monday 09 March 2026 00:55:16 +0000 (0:00:00.391) 0:07:54.508 ********** 2026-03-09 00:59:49.837342 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837352 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837357 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837362 | orchestrator | 2026-03-09 00:59:49.837367 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-09 00:59:49.837371 | orchestrator | Monday 09 March 2026 00:55:17 +0000 (0:00:00.800) 0:07:55.308 ********** 2026-03-09 00:59:49.837382 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837387 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837395 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837400 | orchestrator | 2026-03-09 00:59:49.837404 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-09 00:59:49.837409 | orchestrator | Monday 09 March 2026 00:55:17 +0000 (0:00:00.413) 0:07:55.722 ********** 
2026-03-09 00:59:49.837414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 00:59:49.837419 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 00:59:49.837424 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 00:59:49.837431 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 00:59:49.837436 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 00:59:49.837441 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 00:59:49.837446 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 00:59:49.837450 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 00:59:49.837455 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 00:59:49.837460 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 00:59:49.837465 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 00:59:49.837470 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 00:59:49.837474 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 00:59:49.837479 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 00:59:49.837484 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 00:59:49.837489 | orchestrator | 2026-03-09 00:59:49.837494 | orchestrator 
| TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-09 00:59:49.837498 | orchestrator | Monday 09 March 2026 00:55:21 +0000 (0:00:03.446) 0:07:59.168 ********** 2026-03-09 00:59:49.837503 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.837508 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837513 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837517 | orchestrator | 2026-03-09 00:59:49.837522 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-09 00:59:49.837527 | orchestrator | Monday 09 March 2026 00:55:21 +0000 (0:00:00.313) 0:07:59.481 ********** 2026-03-09 00:59:49.837532 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.837537 | orchestrator | 2026-03-09 00:59:49.837541 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-09 00:59:49.837546 | orchestrator | Monday 09 March 2026 00:55:22 +0000 (0:00:00.543) 0:08:00.025 ********** 2026-03-09 00:59:49.837551 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 00:59:49.837555 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 00:59:49.837560 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-09 00:59:49.837571 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-09 00:59:49.837576 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 00:59:49.837581 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-09 00:59:49.837585 | orchestrator | 2026-03-09 00:59:49.837590 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-09 00:59:49.837595 | orchestrator | Monday 09 March 2026 00:55:23 +0000 (0:00:01.393) 0:08:01.418 
********** 2026-03-09 00:59:49.837603 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.837608 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:59:49.837613 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:49.837617 | orchestrator | 2026-03-09 00:59:49.837625 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:59:49.837630 | orchestrator | Monday 09 March 2026 00:55:26 +0000 (0:00:02.354) 0:08:03.773 ********** 2026-03-09 00:59:49.837635 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:59:49.837640 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:59:49.837644 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.837649 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 00:59:49.837654 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-09 00:59:49.837658 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.837663 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:59:49.837668 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-09 00:59:49.837673 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.837677 | orchestrator | 2026-03-09 00:59:49.837682 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-09 00:59:49.837687 | orchestrator | Monday 09 March 2026 00:55:27 +0000 (0:00:01.450) 0:08:05.223 ********** 2026-03-09 00:59:49.837692 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:59:49.837696 | orchestrator | 2026-03-09 00:59:49.837701 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-09 00:59:49.837706 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:02.211) 0:08:07.435 
********** 2026-03-09 00:59:49.837734 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.837740 | orchestrator | 2026-03-09 00:59:49.837745 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-09 00:59:49.837750 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:00.875) 0:08:08.310 ********** 2026-03-09 00:59:49.837754 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b4a24c5-7164-5e55-92cc-433a48be10d0', 'data_vg': 'ceph-0b4a24c5-7164-5e55-92cc-433a48be10d0'}) 2026-03-09 00:59:49.837759 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9c74837a-43e3-5ea9-9fe0-5cec11260b17', 'data_vg': 'ceph-9c74837a-43e3-5ea9-9fe0-5cec11260b17'}) 2026-03-09 00:59:49.837767 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e95d8336-562c-5e60-938c-e1db43f5f553', 'data_vg': 'ceph-e95d8336-562c-5e60-938c-e1db43f5f553'}) 2026-03-09 00:59:49.837772 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-07cae8b8-d309-58e5-9f3f-3806cd3fe573', 'data_vg': 'ceph-07cae8b8-d309-58e5-9f3f-3806cd3fe573'}) 2026-03-09 00:59:49.837777 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c56389c1-f3b1-5ba6-b160-f425a16b3e47', 'data_vg': 'ceph-c56389c1-f3b1-5ba6-b160-f425a16b3e47'}) 2026-03-09 00:59:49.837782 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-590958f1-5006-5da8-896c-bdb08f0ac33f', 'data_vg': 'ceph-590958f1-5006-5da8-896c-bdb08f0ac33f'}) 2026-03-09 00:59:49.837787 | orchestrator | 2026-03-09 00:59:49.837798 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-09 00:59:49.837803 | orchestrator | Monday 09 March 2026 00:56:14 +0000 (0:00:44.122) 0:08:52.433 ********** 2026-03-09 00:59:49.837808 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
00:59:49.837813 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.837818 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.837822 | orchestrator | 2026-03-09 00:59:49.837827 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-09 00:59:49.837832 | orchestrator | Monday 09 March 2026 00:56:15 +0000 (0:00:00.366) 0:08:52.800 ********** 2026-03-09 00:59:49.837840 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.837845 | orchestrator | 2026-03-09 00:59:49.837856 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-09 00:59:49.837861 | orchestrator | Monday 09 March 2026 00:56:15 +0000 (0:00:00.867) 0:08:53.668 ********** 2026-03-09 00:59:49.837865 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837870 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837875 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837880 | orchestrator | 2026-03-09 00:59:49.837884 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-09 00:59:49.837889 | orchestrator | Monday 09 March 2026 00:56:16 +0000 (0:00:00.698) 0:08:54.366 ********** 2026-03-09 00:59:49.837894 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.837899 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.837903 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.837908 | orchestrator | 2026-03-09 00:59:49.837913 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-09 00:59:49.837918 | orchestrator | Monday 09 March 2026 00:56:19 +0000 (0:00:02.889) 0:08:57.256 ********** 2026-03-09 00:59:49.837922 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.837927 | 
orchestrator | 2026-03-09 00:59:49.837932 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-09 00:59:49.837937 | orchestrator | Monday 09 March 2026 00:56:20 +0000 (0:00:00.897) 0:08:58.154 ********** 2026-03-09 00:59:49.837942 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.837946 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.837951 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.837956 | orchestrator | 2026-03-09 00:59:49.837961 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-09 00:59:49.837966 | orchestrator | Monday 09 March 2026 00:56:21 +0000 (0:00:01.280) 0:08:59.434 ********** 2026-03-09 00:59:49.837970 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.837975 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.837982 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.837986 | orchestrator | 2026-03-09 00:59:49.837991 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-09 00:59:49.837995 | orchestrator | Monday 09 March 2026 00:56:22 +0000 (0:00:01.287) 0:09:00.721 ********** 2026-03-09 00:59:49.838000 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.838004 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.838009 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.838040 | orchestrator | 2026-03-09 00:59:49.838045 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-09 00:59:49.838049 | orchestrator | Monday 09 March 2026 00:56:24 +0000 (0:00:01.904) 0:09:02.626 ********** 2026-03-09 00:59:49.838054 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.838058 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.838063 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.838067 | orchestrator 
| 2026-03-09 00:59:49.838072 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-09 00:59:49.838076 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.642) 0:09:03.268 ********** 2026-03-09 00:59:49.838081 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.838085 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.838090 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.838094 | orchestrator | 2026-03-09 00:59:49.838099 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-09 00:59:49.838103 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.340) 0:09:03.609 ********** 2026-03-09 00:59:49.838108 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 00:59:49.838112 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-09 00:59:49.838117 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-09 00:59:49.838131 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-09 00:59:49.838136 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-09 00:59:49.838140 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-09 00:59:49.838145 | orchestrator | 2026-03-09 00:59:49.838149 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-09 00:59:49.838154 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:01.085) 0:09:04.694 ********** 2026-03-09 00:59:49.838159 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-09 00:59:49.838163 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-09 00:59:49.838170 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-09 00:59:49.838175 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-09 00:59:49.838179 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-09 00:59:49.838184 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-09 00:59:49.838188 | 
2026-03-09 00:59:49.838193 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
Monday 09 March 2026 00:56:28 +0000 (0:00:02.015) 0:09:06.710 **********
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-5] => (item=5)
changed: [testbed-node-4] => (item=4)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=3)

TASK [ceph-osd : Unset noup flag] **********************************************
Monday 09 March 2026 00:56:32 +0000 (0:00:03.565) 0:09:10.275 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Monday 09 March 2026 00:56:35 +0000 (0:00:03.124) 0:09:13.399 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Monday 09 March 2026 00:56:48 +0000 (0:00:12.376) 0:09:25.776 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 09 March 2026 00:56:49 +0000 (0:00:01.133) 0:09:26.910 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Monday 09 March 2026 00:56:49 +0000 (0:00:00.376) 0:09:27.286 **********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Monday 09 March 2026 00:56:50 +0000 (0:00:01.009) 0:09:28.296 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Monday 09 March 2026 00:56:50 +0000 (0:00:00.460) 0:09:28.756 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Monday 09 March 2026 00:56:51 +0000 (0:00:00.507) 0:09:29.264 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Monday 09 March 2026 00:56:51 +0000 (0:00:00.249) 0:09:29.514 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Monday 09 March 2026 00:56:52 +0000 (0:00:00.329) 0:09:29.843 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Monday 09 March 2026 00:56:52 +0000 (0:00:00.246) 0:09:30.089 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Monday 09 March 2026 00:56:52 +0000 (0:00:00.241) 0:09:30.331 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Monday 09 March 2026 00:56:52 +0000 (0:00:00.141) 0:09:30.473 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Monday 09 March 2026 00:56:52 +0000 (0:00:00.233) 0:09:30.706 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Monday 09 March 2026 00:56:53 +0000 (0:00:00.888) 0:09:31.595 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Monday 09 March 2026 00:56:54 +0000 (0:00:00.407) 0:09:32.002 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Monday 09 March 2026 00:56:54 +0000 (0:00:00.342) 0:09:32.345 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Monday 09 March 2026 00:56:54 +0000 (0:00:00.267) 0:09:32.613 **********
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:56:55 +0000 (0:00:00.971) 0:09:33.584 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:56:57 +0000 (0:00:01.409) 0:09:34.994 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:56:58 +0000 (0:00:01.099) 0:09:36.094 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:56:59 +0000 (0:00:01.335) 0:09:37.429 **********
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:57:00 +0000 (0:00:00.790) 0:09:38.220 **********
ok: [testbed-node-3]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:57:01 +0000 (0:00:01.012) 0:09:39.233 **********
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:57:02 +0000 (0:00:00.704) 0:09:39.937 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:57:03 +0000 (0:00:01.322) 0:09:41.260 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:57:04 +0000 (0:00:00.640) 0:09:41.901 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:57:05 +0000 (0:00:01.003) 0:09:42.905 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:57:06 +0000 (0:00:01.136) 0:09:44.041 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:57:07 +0000 (0:00:01.544) 0:09:45.585 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:57:08 +0000 (0:00:00.722) 0:09:46.307 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:57:09 +0000 (0:00:01.013) 0:09:47.320 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:57:10 +0000 (0:00:00.744) 0:09:48.065 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:57:11 +0000 (0:00:01.031) 0:09:49.096 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:57:12 +0000 (0:00:00.823) 0:09:49.920 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:57:13 +0000 (0:00:00.966) 0:09:50.887 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:57:13 +0000 (0:00:00.646) 0:09:51.533 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:57:14 +0000 (0:00:00.978) 0:09:52.512 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:57:15 +0000 (0:00:00.910) 0:09:53.422 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Monday 09 March 2026 00:57:17 +0000 (0:00:01.542) 0:09:54.965 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Monday 09 March 2026 00:57:21 +0000 (0:00:04.012) 0:09:58.977 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Monday 09 March 2026 00:57:23 +0000 (0:00:02.161) 0:10:01.139 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Monday 09 March 2026 00:57:25 +0000 (0:00:02.385) 0:10:03.524 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Monday 09 March 2026 00:57:27 +0000 (0:00:01.303) 0:10:04.827 **********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Monday 09 March 2026 00:57:28 +0000 (0:00:01.422) 0:10:06.250 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Monday 09 March 2026 00:57:30 +0000 (0:00:02.083) 0:10:08.333 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Monday 09 March 2026 00:57:34 +0000 (0:00:04.068) 0:10:12.401 **********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Monday 09 March 2026 00:57:36 +0000 (0:00:01.430) 0:10:13.832 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Monday 09 March 2026 00:57:36 +0000 (0:00:00.926) 0:10:14.758 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Monday 09 March 2026 00:57:39 +0000 (0:00:02.302) 0:10:17.061 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:57:40 +0000 (0:00:01.219) 0:10:18.281 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:57:41 +0000 (0:00:00.547) 0:10:18.828 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:57:41 +0000 (0:00:00.884) 0:10:19.713 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:57:42 +0000 (0:00:00.380) 0:10:20.093 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:57:43 +0000 (0:00:00.832) 0:10:20.926 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:57:44 +0000 (0:00:01.143) 0:10:22.070 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:57:45 +0000 (0:00:00.759) 0:10:22.829 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:57:45 +0000 (0:00:00.364) 0:10:23.194 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:57:45 +0000 (0:00:00.371) 0:10:23.566 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:57:46 +0000 (0:00:00.609) 0:10:24.175 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:57:47 +0000 (0:00:00.782) 0:10:24.958 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:57:47 +0000 (0:00:00.774) 0:10:25.732 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:57:48 +0000 (0:00:00.362) 0:10:26.095 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:57:48 +0000 (0:00:00.654) 0:10:26.749 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:57:49 +0000 (0:00:00.331) 0:10:27.080 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:57:49 +0000 (0:00:00.373) 0:10:27.454 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:57:50 +0000 (0:00:00.341) 0:10:27.796 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:57:50 +0000 (0:00:00.645) 0:10:28.442 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:57:51 +0000 (0:00:00.327) 0:10:28.770 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:57:51 +0000 (0:00:00.314) 0:10:29.084 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:57:51 +0000 (0:00:00.438) 0:10:29.523 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Monday 09 March 2026 00:57:52 +0000 (0:00:00.858) 0:10:30.382 **********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Monday 09 March 2026 00:57:53 +0000 (0:00:00.459) 0:10:30.841 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Monday 09 March 2026 00:57:55 +0000 (0:00:02.226) 0:10:33.068 **********
skipping: [testbed-node-3]
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-09 00:59:49.840596 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.840601 | orchestrator | 2026-03-09 00:59:49.840605 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-09 00:59:49.840610 | orchestrator | Monday 09 March 2026 00:57:55 +0000 (0:00:00.207) 0:10:33.276 ********** 2026-03-09 00:59:49.840615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 00:59:49.840623 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 00:59:49.840628 | orchestrator | 2026-03-09 00:59:49.840635 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-09 00:59:49.840640 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:08.819) 0:10:42.095 ********** 2026-03-09 00:59:49.840644 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:59:49.840649 | orchestrator | 2026-03-09 00:59:49.840654 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-09 00:59:49.840658 | orchestrator | Monday 09 March 2026 00:58:08 +0000 (0:00:03.840) 0:10:45.936 ********** 2026-03-09 00:59:49.840663 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-09 00:59:49.840667 | orchestrator | 2026-03-09 00:59:49.840672 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-09 00:59:49.840676 | orchestrator | Monday 09 March 2026 00:58:08 +0000 (0:00:00.642) 0:10:46.579 ********** 2026-03-09 00:59:49.840684 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-09 00:59:49.840689 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-09 00:59:49.840693 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-09 00:59:49.840698 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-09 00:59:49.840703 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-09 00:59:49.840707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-09 00:59:49.840733 | orchestrator | 2026-03-09 00:59:49.840739 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-09 00:59:49.840743 | orchestrator | Monday 09 March 2026 00:58:09 +0000 (0:00:01.158) 0:10:47.737 ********** 2026-03-09 00:59:49.840748 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.840752 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:59:49.840757 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:49.840762 | orchestrator | 2026-03-09 00:59:49.840766 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:59:49.840771 | orchestrator | Monday 09 March 2026 00:58:12 +0000 (0:00:02.597) 0:10:50.334 ********** 2026-03-09 00:59:49.840775 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:59:49.840780 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-09 00:59:49.840784 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.840789 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 00:59:49.840794 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-09 00:59:49.840798 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.840803 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:59:49.840807 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-09 00:59:49.840812 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.840816 | orchestrator | 2026-03-09 00:59:49.840821 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-09 00:59:49.840825 | orchestrator | Monday 09 March 2026 00:58:14 +0000 (0:00:01.772) 0:10:52.106 ********** 2026-03-09 00:59:49.840830 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.840835 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.840839 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.840844 | orchestrator | 2026-03-09 00:59:49.840848 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-09 00:59:49.840853 | orchestrator | Monday 09 March 2026 00:58:17 +0000 (0:00:03.051) 0:10:55.157 ********** 2026-03-09 00:59:49.840857 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.840865 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.840869 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.840874 | orchestrator | 2026-03-09 00:59:49.840878 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-09 00:59:49.840883 | orchestrator | Monday 09 March 2026 00:58:17 +0000 (0:00:00.414) 0:10:55.572 ********** 2026-03-09 00:59:49.840887 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-09 00:59:49.840892 | orchestrator | 2026-03-09 00:59:49.840897 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-09 00:59:49.840901 | orchestrator | Monday 09 March 2026 00:58:18 +0000 (0:00:01.027) 0:10:56.600 ********** 2026-03-09 00:59:49.840906 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.840910 | orchestrator | 2026-03-09 00:59:49.840915 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-09 00:59:49.840919 | orchestrator | Monday 09 March 2026 00:58:19 +0000 (0:00:00.589) 0:10:57.189 ********** 2026-03-09 00:59:49.840924 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.840931 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.840936 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.840941 | orchestrator | 2026-03-09 00:59:49.840945 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-09 00:59:49.840950 | orchestrator | Monday 09 March 2026 00:58:20 +0000 (0:00:01.367) 0:10:58.556 ********** 2026-03-09 00:59:49.840954 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.840959 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.840963 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.840968 | orchestrator | 2026-03-09 00:59:49.840972 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-09 00:59:49.840977 | orchestrator | Monday 09 March 2026 00:58:22 +0000 (0:00:01.588) 0:11:00.144 ********** 2026-03-09 00:59:49.840982 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.840986 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.840991 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.840995 | orchestrator | 2026-03-09 
00:59:49.841000 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-09 00:59:49.841007 | orchestrator | Monday 09 March 2026 00:58:24 +0000 (0:00:01.851) 0:11:01.995 ********** 2026-03-09 00:59:49.841011 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.841016 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.841030 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.841035 | orchestrator | 2026-03-09 00:59:49.841040 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-09 00:59:49.841044 | orchestrator | Monday 09 March 2026 00:58:26 +0000 (0:00:02.051) 0:11:04.047 ********** 2026-03-09 00:59:49.841049 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841053 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841058 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841062 | orchestrator | 2026-03-09 00:59:49.841067 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 00:59:49.841071 | orchestrator | Monday 09 March 2026 00:58:28 +0000 (0:00:01.739) 0:11:05.787 ********** 2026-03-09 00:59:49.841076 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.841080 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.841085 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.841090 | orchestrator | 2026-03-09 00:59:49.841100 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-09 00:59:49.841105 | orchestrator | Monday 09 March 2026 00:58:28 +0000 (0:00:00.740) 0:11:06.528 ********** 2026-03-09 00:59:49.841110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.841114 | orchestrator | 2026-03-09 00:59:49.841119 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-09 00:59:49.841123 | orchestrator | Monday 09 March 2026 00:58:29 +0000 (0:00:00.938) 0:11:07.466 ********** 2026-03-09 00:59:49.841128 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841132 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841137 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841142 | orchestrator | 2026-03-09 00:59:49.841146 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-09 00:59:49.841151 | orchestrator | Monday 09 March 2026 00:58:30 +0000 (0:00:00.444) 0:11:07.911 ********** 2026-03-09 00:59:49.841155 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.841160 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.841165 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.841169 | orchestrator | 2026-03-09 00:59:49.841174 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-09 00:59:49.841178 | orchestrator | Monday 09 March 2026 00:58:31 +0000 (0:00:01.416) 0:11:09.327 ********** 2026-03-09 00:59:49.841183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.841187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.841196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.841200 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841210 | orchestrator | 2026-03-09 00:59:49.841215 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-09 00:59:49.841220 | orchestrator | Monday 09 March 2026 00:58:32 +0000 (0:00:01.161) 0:11:10.488 ********** 2026-03-09 00:59:49.841224 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841229 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841233 | orchestrator | ok: [testbed-node-5] 2026-03-09 
00:59:49.841238 | orchestrator | 2026-03-09 00:59:49.841243 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-09 00:59:49.841247 | orchestrator | 2026-03-09 00:59:49.841252 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 00:59:49.841256 | orchestrator | Monday 09 March 2026 00:58:33 +0000 (0:00:00.932) 0:11:11.421 ********** 2026-03-09 00:59:49.841261 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.841265 | orchestrator | 2026-03-09 00:59:49.841273 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-09 00:59:49.841278 | orchestrator | Monday 09 March 2026 00:58:34 +0000 (0:00:00.605) 0:11:12.027 ********** 2026-03-09 00:59:49.841282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.841286 | orchestrator | 2026-03-09 00:59:49.841290 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 00:59:49.841295 | orchestrator | Monday 09 March 2026 00:58:35 +0000 (0:00:00.848) 0:11:12.875 ********** 2026-03-09 00:59:49.841299 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841303 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841307 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841316 | orchestrator | 2026-03-09 00:59:49.841321 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 00:59:49.841325 | orchestrator | Monday 09 March 2026 00:58:35 +0000 (0:00:00.385) 0:11:13.261 ********** 2026-03-09 00:59:49.841329 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841333 | orchestrator | ok: [testbed-node-4] 2026-03-09 
00:59:49.841337 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841342 | orchestrator | 2026-03-09 00:59:49.841346 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 00:59:49.841350 | orchestrator | Monday 09 March 2026 00:58:36 +0000 (0:00:00.691) 0:11:13.952 ********** 2026-03-09 00:59:49.841354 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841358 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841362 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841366 | orchestrator | 2026-03-09 00:59:49.841371 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 00:59:49.841375 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:00.882) 0:11:14.835 ********** 2026-03-09 00:59:49.841379 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841383 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841387 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841391 | orchestrator | 2026-03-09 00:59:49.841395 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 00:59:49.841400 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:00.756) 0:11:15.592 ********** 2026-03-09 00:59:49.841404 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841410 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841415 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841419 | orchestrator | 2026-03-09 00:59:49.841423 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-09 00:59:49.841427 | orchestrator | Monday 09 March 2026 00:58:38 +0000 (0:00:00.449) 0:11:16.041 ********** 2026-03-09 00:59:49.841431 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841438 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841442 | orchestrator | skipping: 
[testbed-node-5] 2026-03-09 00:59:49.841447 | orchestrator | 2026-03-09 00:59:49.841451 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 00:59:49.841455 | orchestrator | Monday 09 March 2026 00:58:38 +0000 (0:00:00.462) 0:11:16.504 ********** 2026-03-09 00:59:49.841459 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841463 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841467 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841472 | orchestrator | 2026-03-09 00:59:49.841476 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:59:49.841480 | orchestrator | Monday 09 March 2026 00:58:39 +0000 (0:00:00.496) 0:11:17.000 ********** 2026-03-09 00:59:49.841484 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841488 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841492 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841496 | orchestrator | 2026-03-09 00:59:49.841501 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 00:59:49.841505 | orchestrator | Monday 09 March 2026 00:58:39 +0000 (0:00:00.760) 0:11:17.760 ********** 2026-03-09 00:59:49.841514 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841518 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841522 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841527 | orchestrator | 2026-03-09 00:59:49.841531 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:59:49.841535 | orchestrator | Monday 09 March 2026 00:58:40 +0000 (0:00:00.822) 0:11:18.583 ********** 2026-03-09 00:59:49.841539 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841543 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841547 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
00:59:49.841552 | orchestrator | 2026-03-09 00:59:49.841556 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:59:49.841560 | orchestrator | Monday 09 March 2026 00:58:41 +0000 (0:00:00.307) 0:11:18.890 ********** 2026-03-09 00:59:49.841564 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841568 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841572 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841577 | orchestrator | 2026-03-09 00:59:49.841581 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 00:59:49.841585 | orchestrator | Monday 09 March 2026 00:58:41 +0000 (0:00:00.447) 0:11:19.338 ********** 2026-03-09 00:59:49.841589 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841593 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841597 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841602 | orchestrator | 2026-03-09 00:59:49.841606 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 00:59:49.841610 | orchestrator | Monday 09 March 2026 00:58:41 +0000 (0:00:00.344) 0:11:19.683 ********** 2026-03-09 00:59:49.841614 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841618 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841622 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841626 | orchestrator | 2026-03-09 00:59:49.841631 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 00:59:49.841635 | orchestrator | Monday 09 March 2026 00:58:42 +0000 (0:00:00.388) 0:11:20.072 ********** 2026-03-09 00:59:49.841639 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841643 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841647 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841651 | orchestrator | 2026-03-09 
00:59:49.841657 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 00:59:49.841662 | orchestrator | Monday 09 March 2026 00:58:42 +0000 (0:00:00.320) 0:11:20.392 ********** 2026-03-09 00:59:49.841666 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841670 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841674 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841681 | orchestrator | 2026-03-09 00:59:49.841685 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 00:59:49.841689 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:00.525) 0:11:20.917 ********** 2026-03-09 00:59:49.841694 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841698 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841702 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841706 | orchestrator | 2026-03-09 00:59:49.841718 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:59:49.841722 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:00.304) 0:11:21.221 ********** 2026-03-09 00:59:49.841726 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841731 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841735 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841739 | orchestrator | 2026-03-09 00:59:49.841743 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 00:59:49.841747 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:00.301) 0:11:21.523 ********** 2026-03-09 00:59:49.841751 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841755 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841760 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841764 | orchestrator | 2026-03-09 00:59:49.841768 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 00:59:49.841772 | orchestrator | Monday 09 March 2026 00:58:44 +0000 (0:00:00.361) 0:11:21.884 ********** 2026-03-09 00:59:49.841776 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.841780 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.841785 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.841789 | orchestrator | 2026-03-09 00:59:49.841793 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-09 00:59:49.841797 | orchestrator | Monday 09 March 2026 00:58:44 +0000 (0:00:00.803) 0:11:22.688 ********** 2026-03-09 00:59:49.841803 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.841808 | orchestrator | 2026-03-09 00:59:49.841812 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-09 00:59:49.841816 | orchestrator | Monday 09 March 2026 00:58:45 +0000 (0:00:00.647) 0:11:23.336 ********** 2026-03-09 00:59:49.841820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.841824 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:59:49.841828 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:49.841833 | orchestrator | 2026-03-09 00:59:49.841837 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:59:49.841841 | orchestrator | Monday 09 March 2026 00:58:47 +0000 (0:00:02.275) 0:11:25.611 ********** 2026-03-09 00:59:49.841845 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:59:49.841849 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:59:49.841853 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.841857 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-09 00:59:49.841862 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-09 00:59:49.841866 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.841870 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:59:49.841874 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-09 00:59:49.841878 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.841883 | orchestrator | 2026-03-09 00:59:49.841887 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-09 00:59:49.841891 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:01.547) 0:11:27.159 ********** 2026-03-09 00:59:49.841895 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.841899 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.841903 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.841910 | orchestrator | 2026-03-09 00:59:49.841914 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-09 00:59:49.841919 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:00.350) 0:11:27.510 ********** 2026-03-09 00:59:49.841923 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.841927 | orchestrator | 2026-03-09 00:59:49.841931 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-09 00:59:49.841935 | orchestrator | Monday 09 March 2026 00:58:50 +0000 (0:00:00.506) 0:11:28.016 ********** 2026-03-09 00:59:49.841939 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.841944 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.841948 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.841952 | orchestrator | 2026-03-09 00:59:49.841956 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-09 00:59:49.841961 | orchestrator | Monday 09 March 2026 00:58:51 +0000 (0:00:01.486) 0:11:29.503 ********** 2026-03-09 00:59:49.841965 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.841971 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-09 00:59:49.841975 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.841980 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-09 00:59:49.841984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.841988 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-09 00:59:49.841992 | orchestrator | 2026-03-09 00:59:49.841996 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-09 00:59:49.842000 | orchestrator | Monday 09 March 2026 00:58:57 +0000 (0:00:05.555) 0:11:35.058 ********** 2026-03-09 00:59:49.842005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.842009 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:49.842030 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.842034 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:49.842038 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:49.842042 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:49.842046 | orchestrator | 2026-03-09 00:59:49.842051 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:59:49.842055 | orchestrator | Monday 09 March 2026 00:58:59 +0000 (0:00:02.398) 0:11:37.456 ********** 2026-03-09 00:59:49.842065 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:59:49.842069 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.842073 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 00:59:49.842078 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.842082 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:59:49.842086 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.842090 | orchestrator | 2026-03-09 00:59:49.842097 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-09 00:59:49.842101 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:01.215) 0:11:38.672 ********** 2026-03-09 00:59:49.842108 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-09 00:59:49.842113 | orchestrator | 2026-03-09 00:59:49.842117 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-09 00:59:49.842121 | orchestrator | Monday 09 March 2026 00:59:01 +0000 (0:00:00.265) 0:11:38.938 ********** 2026-03-09 00:59:49.842125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-09 00:59:49.842130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842146 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.842150 | orchestrator | 2026-03-09 00:59:49.842155 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-09 00:59:49.842159 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:01.396) 0:11:40.334 ********** 2026-03-09 00:59:49.842163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:59:49.842184 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
00:59:49.842188 | orchestrator | 2026-03-09 00:59:49.842192 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-09 00:59:49.842196 | orchestrator | Monday 09 March 2026 00:59:03 +0000 (0:00:00.660) 0:11:40.994 ********** 2026-03-09 00:59:49.842200 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:59:49.842205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:59:49.842211 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:59:49.842216 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:59:49.842220 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:59:49.842224 | orchestrator | 2026-03-09 00:59:49.842228 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-09 00:59:49.842232 | orchestrator | Monday 09 March 2026 00:59:35 +0000 (0:00:32.599) 0:12:13.594 ********** 2026-03-09 00:59:49.842236 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.842240 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.842247 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.842251 | orchestrator | 2026-03-09 00:59:49.842255 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-09 00:59:49.842260 | orchestrator | 
Monday 09 March 2026 00:59:36 +0000 (0:00:00.328) 0:12:13.923 ********** 2026-03-09 00:59:49.842264 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.842268 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.842272 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.842276 | orchestrator | 2026-03-09 00:59:49.842280 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-09 00:59:49.842284 | orchestrator | Monday 09 March 2026 00:59:36 +0000 (0:00:00.317) 0:12:14.241 ********** 2026-03-09 00:59:49.842289 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.842298 | orchestrator | 2026-03-09 00:59:49.842302 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-09 00:59:49.842307 | orchestrator | Monday 09 March 2026 00:59:37 +0000 (0:00:00.820) 0:12:15.061 ********** 2026-03-09 00:59:49.842313 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.842317 | orchestrator | 2026-03-09 00:59:49.842321 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-09 00:59:49.842325 | orchestrator | Monday 09 March 2026 00:59:37 +0000 (0:00:00.582) 0:12:15.643 ********** 2026-03-09 00:59:49.842330 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.842334 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.842338 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.842342 | orchestrator | 2026-03-09 00:59:49.842346 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-09 00:59:49.842351 | orchestrator | Monday 09 March 2026 00:59:39 +0000 (0:00:01.355) 0:12:16.999 ********** 2026-03-09 00:59:49.842355 | orchestrator | changed: 
[testbed-node-3] 2026-03-09 00:59:49.842359 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.842363 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.842367 | orchestrator | 2026-03-09 00:59:49.842371 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-09 00:59:49.842375 | orchestrator | Monday 09 March 2026 00:59:40 +0000 (0:00:01.667) 0:12:18.667 ********** 2026-03-09 00:59:49.842379 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:59:49.842383 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:59:49.842388 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:59:49.842392 | orchestrator | 2026-03-09 00:59:49.842396 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-09 00:59:49.842400 | orchestrator | Monday 09 March 2026 00:59:42 +0000 (0:00:02.085) 0:12:20.753 ********** 2026-03-09 00:59:49.842404 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.842408 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.842413 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:59:49.842417 | orchestrator | 2026-03-09 00:59:49.842426 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 00:59:49.842431 | orchestrator | Monday 09 March 2026 00:59:45 +0000 (0:00:02.912) 0:12:23.665 ********** 2026-03-09 00:59:49.842435 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.842439 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.842443 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.842447 | orchestrator 
| 2026-03-09 00:59:49.842451 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-09 00:59:49.842456 | orchestrator | Monday 09 March 2026 00:59:46 +0000 (0:00:00.379) 0:12:24.044 ********** 2026-03-09 00:59:49.842462 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:59:49.842466 | orchestrator | 2026-03-09 00:59:49.842470 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-09 00:59:49.842475 | orchestrator | Monday 09 March 2026 00:59:46 +0000 (0:00:00.483) 0:12:24.528 ********** 2026-03-09 00:59:49.842479 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.842483 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.842487 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.842491 | orchestrator | 2026-03-09 00:59:49.842495 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-09 00:59:49.842500 | orchestrator | Monday 09 March 2026 00:59:47 +0000 (0:00:00.490) 0:12:25.019 ********** 2026-03-09 00:59:49.842504 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:49.842508 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:49.842514 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:49.842518 | orchestrator | 2026-03-09 00:59:49.842522 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-09 00:59:49.842526 | orchestrator | Monday 09 March 2026 00:59:47 +0000 (0:00:00.298) 0:12:25.317 ********** 2026-03-09 00:59:49.842530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:59:49.842535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:59:49.842539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:59:49.842543 | orchestrator 
| skipping: [testbed-node-3] 2026-03-09 00:59:49.842547 | orchestrator | 2026-03-09 00:59:49.842551 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-09 00:59:49.842555 | orchestrator | Monday 09 March 2026 00:59:48 +0000 (0:00:00.630) 0:12:25.948 ********** 2026-03-09 00:59:49.842559 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:49.842564 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:49.842568 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:49.842572 | orchestrator | 2026-03-09 00:59:49.842576 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:59:49.842580 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-09 00:59:49.842584 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-09 00:59:49.842589 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-09 00:59:49.842593 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-09 00:59:49.842597 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-09 00:59:49.842604 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-09 00:59:49.842608 | orchestrator | 2026-03-09 00:59:49.842612 | orchestrator | 2026-03-09 00:59:49.842617 | orchestrator | 2026-03-09 00:59:49.842621 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:59:49.842625 | orchestrator | Monday 09 March 2026 00:59:48 +0000 (0:00:00.275) 0:12:26.223 ********** 2026-03-09 00:59:49.842629 | orchestrator | =============================================================================== 
2026-03-09 00:59:49.842633 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.50s 2026-03-09 00:59:49.842637 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.12s 2026-03-09 00:59:49.842642 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.35s 2026-03-09 00:59:49.842650 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.60s 2026-03-09 00:59:49.842654 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.91s 2026-03-09 00:59:49.842658 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.50s 2026-03-09 00:59:49.842662 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.38s 2026-03-09 00:59:49.842666 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.99s 2026-03-09 00:59:49.842670 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.85s 2026-03-09 00:59:49.842674 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.82s 2026-03-09 00:59:49.842679 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.75s 2026-03-09 00:59:49.842683 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.17s 2026-03-09 00:59:49.842687 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.56s 2026-03-09 00:59:49.842691 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.46s 2026-03-09 00:59:49.842695 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.17s 2026-03-09 00:59:49.842699 | orchestrator | ceph-facts : Set_fact _container_exec_cmd ------------------------------- 4.12s 2026-03-09 
00:59:49.842703 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.09s 2026-03-09 00:59:49.842707 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.07s 2026-03-09 00:59:49.842741 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.01s 2026-03-09 00:59:49.842746 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 3.98s 2026-03-09 00:59:49.842751 | orchestrator | 2026-03-09 00:59:49 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:59:49.842755 | orchestrator | 2026-03-09 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:52.869701 | orchestrator | 2026-03-09 00:59:52 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 00:59:52.870948 | orchestrator | 2026-03-09 00:59:52 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:59:52.872850 | orchestrator | 2026-03-09 00:59:52 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:59:52.872900 | orchestrator | 2026-03-09 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:55.917431 | orchestrator | 2026-03-09 00:59:55 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 00:59:55.920235 | orchestrator | 2026-03-09 00:59:55 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 00:59:55.922904 | orchestrator | 2026-03-09 00:59:55 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 00:59:55.922928 | orchestrator | 2026-03-09 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:58.967789 | orchestrator | 2026-03-09 00:59:58 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 00:59:58.968392 | orchestrator | 2026-03-09 00:59:58 | INFO  | Task 
8d5f570a-217b-4502-951f-b6648a08310f is in state
STARTED 2026-03-09 01:00:44.841570 | orchestrator | 2026-03-09 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:47.888468 | orchestrator | 2026-03-09 01:00:47 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:00:47.889339 | orchestrator | 2026-03-09 01:00:47 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 01:00:47.890556 | orchestrator | 2026-03-09 01:00:47 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:00:47.890607 | orchestrator | 2026-03-09 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:50.950514 | orchestrator | 2026-03-09 01:00:50 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:00:50.952415 | orchestrator | 2026-03-09 01:00:50 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 01:00:50.954174 | orchestrator | 2026-03-09 01:00:50 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:00:50.954267 | orchestrator | 2026-03-09 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:54.014210 | orchestrator | 2026-03-09 01:00:54 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:00:54.016751 | orchestrator | 2026-03-09 01:00:54 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state STARTED 2026-03-09 01:00:54.019121 | orchestrator | 2026-03-09 01:00:54 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:00:54.019191 | orchestrator | 2026-03-09 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:57.083707 | orchestrator | 2026-03-09 01:00:57 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:00:57.085214 | orchestrator | 2026-03-09 01:00:57 | INFO  | Task 8d5f570a-217b-4502-951f-b6648a08310f is in state SUCCESS 2026-03-09 01:00:57.086971 | orchestrator | 
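The repeated status lines above follow a simple poll-until-done pattern: check each task's state, sleep one second, repeat until no task is still STARTED. A minimal sketch of that loop — `get_task_state` is a hypothetical stand-in for the real task-status call, not the OSISM client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll each task until every one has left the STARTED state.

    get_task_state maps a task id to its current state string
    (hypothetical helper; the real status source is an assumption).
    Returns a dict of final states; raises TimeoutError if tasks
    are still running when the deadline passes.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            # "Wait 1 second(s) until the next check"
            time.sleep(interval)
    return states
```

A fixed one-second interval keeps the log readable at the cost of up to one second of extra latency per task; a backoff or event-driven notification would reduce polling traffic for long-running deployments.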
2026-03-09 01:00:57.087010 | orchestrator | 2026-03-09 01:00:57.087022 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:00:57.087034 | orchestrator | 2026-03-09 01:00:57.087046 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:00:57.087058 | orchestrator | Monday 09 March 2026 00:58:00 +0000 (0:00:00.277) 0:00:00.277 ********** 2026-03-09 01:00:57.087069 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:57.087080 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:57.087090 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:57.087100 | orchestrator | 2026-03-09 01:00:57.087110 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:00:57.087122 | orchestrator | Monday 09 March 2026 00:58:00 +0000 (0:00:00.303) 0:00:00.580 ********** 2026-03-09 01:00:57.087133 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-09 01:00:57.087144 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-09 01:00:57.087155 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-09 01:00:57.087165 | orchestrator | 2026-03-09 01:00:57.087174 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-09 01:00:57.087184 | orchestrator | 2026-03-09 01:00:57.087195 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:00:57.087206 | orchestrator | Monday 09 March 2026 00:58:00 +0000 (0:00:00.453) 0:00:01.033 ********** 2026-03-09 01:00:57.087217 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:00:57.087228 | orchestrator | 2026-03-09 01:00:57.087239 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-03-09 01:00:57.087250 | orchestrator | Monday 09 March 2026 00:58:01 +0000 (0:00:00.556) 0:00:01.590 ********** 2026-03-09 01:00:57.087281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 01:00:57.087288 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 01:00:57.087295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 01:00:57.087301 | orchestrator | 2026-03-09 01:00:57.087307 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-09 01:00:57.087314 | orchestrator | Monday 09 March 2026 00:58:02 +0000 (0:00:00.721) 0:00:02.312 ********** 2026-03-09 01:00:57.087324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 
01:00:57.087397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087404 | orchestrator | 2026-03-09 01:00:57.087411 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:00:57.087417 | orchestrator | Monday 09 March 2026 00:58:03 +0000 (0:00:01.916) 0:00:04.229 ********** 2026-03-09 01:00:57.087424 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:00:57.087430 | orchestrator | 2026-03-09 01:00:57.087436 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-09 01:00:57.087449 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:00.611) 0:00:04.840 ********** 2026-03-09 01:00:57.087456 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087518 | orchestrator | 2026-03-09 01:00:57.087525 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-09 01:00:57.087531 | orchestrator | Monday 09 March 2026 00:58:07 +0000 (0:00:02.784) 0:00:07.625 ********** 2026-03-09 01:00:57.087542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.087556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.087564 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:57.087573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.087586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.087598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.087606 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:57.087619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.087632 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:57.087682 | orchestrator | 2026-03-09 01:00:57.087690 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-09 01:00:57.087696 | orchestrator | Monday 09 March 2026 00:58:08 +0000 (0:00:01.446) 0:00:09.071 ********** 2026-03-09 01:00:57.087703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.087710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.087717 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:57.087728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.087741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.087752 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:57.087759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.087766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}}}})  2026-03-09 01:00:57.087773 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:57.087779 | orchestrator | 2026-03-09 01:00:57.087785 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-09 01:00:57.087795 | orchestrator | Monday 09 March 2026 00:58:10 +0000 (0:00:01.246) 0:00:10.318 ********** 2026-03-09 01:00:57.087802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk 
GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.087877 | orchestrator | 2026-03-09 01:00:57.087883 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-09 01:00:57.087890 | orchestrator | Monday 09 March 2026 00:58:12 +0000 (0:00:02.628) 0:00:12.946 ********** 2026-03-09 01:00:57.087896 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:00:57.087903 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:57.087909 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:00:57.087915 | orchestrator | 2026-03-09 01:00:57.087921 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-09 01:00:57.087928 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:03.035) 0:00:15.982 ********** 2026-03-09 01:00:57.087934 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:57.087940 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:00:57.087947 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:00:57.087953 | orchestrator | 2026-03-09 01:00:57.087959 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-09 01:00:57.087965 | orchestrator | Monday 09 March 2026 00:58:18 +0000 (0:00:02.649) 
0:00:18.631 ********** 2026-03-09 01:00:57.087972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.087994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:00:57.088005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.088013 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.088023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-09 01:00:57.088035 | orchestrator | 2026-03-09 01:00:57.088042 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-09 01:00:57.088048 | orchestrator | Monday 09 March 2026 00:58:20 +0000 (0:00:02.593) 0:00:21.224 ********** 2026-03-09 01:00:57.088055 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:00:57.088061 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:00:57.088068 | orchestrator | } 2026-03-09 01:00:57.088074 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:00:57.088081 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:00:57.088087 | orchestrator | } 2026-03-09 01:00:57.088093 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:00:57.088100 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:00:57.088106 | orchestrator | } 2026-03-09 01:00:57.088112 | orchestrator | 2026-03-09 01:00:57.088118 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:00:57.088128 | orchestrator | Monday 09 March 2026 00:58:21 +0000 (0:00:00.382) 0:00:21.607 ********** 2026-03-09 01:00:57.088135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.088142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.088149 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:57.088159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.088174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-09 01:00:57.088182 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:57.088188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:00:57.088195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}}}})  2026-03-09 01:00:57.088202 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:57.088209 | orchestrator | 2026-03-09 01:00:57.088215 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:00:57.088226 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:01.768) 0:00:23.375 ********** 2026-03-09 01:00:57.088232 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:57.088238 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:57.088244 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:57.088251 | orchestrator | 2026-03-09 01:00:57.088257 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-09 01:00:57.088263 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.335) 0:00:23.711 ********** 2026-03-09 01:00:57.088269 | orchestrator | 2026-03-09 01:00:57.088276 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-09 01:00:57.088285 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.066) 0:00:23.777 ********** 2026-03-09 01:00:57.088292 | orchestrator | 2026-03-09 01:00:57.088298 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-09 01:00:57.088304 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.073) 0:00:23.851 ********** 2026-03-09 01:00:57.088311 | orchestrator | 2026-03-09 01:00:57.088317 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-09 01:00:57.088323 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.072) 0:00:23.923 ********** 2026-03-09 01:00:57.088329 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:57.088336 | orchestrator | 2026-03-09 01:00:57.088342 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-09 
01:00:57.088348 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.214) 0:00:24.138 ********** 2026-03-09 01:00:57.088354 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:57.088363 | orchestrator | 2026-03-09 01:00:57.088373 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-09 01:00:57.088384 | orchestrator | Monday 09 March 2026 00:58:24 +0000 (0:00:00.235) 0:00:24.374 ********** 2026-03-09 01:00:57.088394 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:57.088403 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:00:57.088412 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:00:57.088422 | orchestrator | 2026-03-09 01:00:57.088432 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-09 01:00:57.088442 | orchestrator | Monday 09 March 2026 00:59:25 +0000 (0:01:00.917) 0:01:25.292 ********** 2026-03-09 01:00:57.088452 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:57.088462 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:00:57.088471 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:00:57.088481 | orchestrator | 2026-03-09 01:00:57.088491 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:00:57.088501 | orchestrator | Monday 09 March 2026 01:00:39 +0000 (0:01:14.280) 0:02:39.572 ********** 2026-03-09 01:00:57.088517 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:00:57.088528 | orchestrator | 2026-03-09 01:00:57.088539 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-09 01:00:57.088549 | orchestrator | Monday 09 March 2026 01:00:39 +0000 (0:00:00.594) 0:02:40.166 ********** 2026-03-09 01:00:57.088560 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:57.088570 | 
orchestrator | 2026-03-09 01:00:57.088581 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-09 01:00:57.088591 | orchestrator | Monday 09 March 2026 01:00:42 +0000 (0:00:03.031) 0:02:43.198 ********** 2026-03-09 01:00:57.088602 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:57.088610 | orchestrator | 2026-03-09 01:00:57.088617 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-09 01:00:57.088623 | orchestrator | Monday 09 March 2026 01:00:45 +0000 (0:00:02.660) 0:02:45.858 ********** 2026-03-09 01:00:57.088629 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:57.088635 | orchestrator | 2026-03-09 01:00:57.088655 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-09 01:00:57.088669 | orchestrator | Monday 09 March 2026 01:00:48 +0000 (0:00:03.216) 0:02:49.074 ********** 2026-03-09 01:00:57.088675 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:57.088681 | orchestrator | 2026-03-09 01:00:57.088688 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-09 01:00:57.088694 | orchestrator | Monday 09 March 2026 01:00:52 +0000 (0:00:03.287) 0:02:52.362 ********** 2026-03-09 01:00:57.088700 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:57.088706 | orchestrator | 2026-03-09 01:00:57.088712 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:00:57.088720 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:00:57.088727 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 01:00:57.088733 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 01:00:57.088739 | orchestrator | 
2026-03-09 01:00:57.088745 | orchestrator | 2026-03-09 01:00:57.088752 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:00:57.088758 | orchestrator | Monday 09 March 2026 01:00:54 +0000 (0:00:02.807) 0:02:55.169 ********** 2026-03-09 01:00:57.088764 | orchestrator | =============================================================================== 2026-03-09 01:00:57.088770 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 74.28s 2026-03-09 01:00:57.088776 | orchestrator | opensearch : Restart opensearch container ------------------------------ 60.92s 2026-03-09 01:00:57.088783 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.29s 2026-03-09 01:00:57.088789 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.22s 2026-03-09 01:00:57.088795 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.04s 2026-03-09 01:00:57.088801 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.03s 2026-03-09 01:00:57.088807 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.81s 2026-03-09 01:00:57.088813 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.78s 2026-03-09 01:00:57.088820 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.66s 2026-03-09 01:00:57.088826 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.65s 2026-03-09 01:00:57.088836 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s 2026-03-09 01:00:57.088843 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.59s 2026-03-09 01:00:57.088849 | orchestrator | opensearch : Ensuring config directories exist 
-------------------------- 1.92s 2026-03-09 01:00:57.088855 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.77s 2026-03-09 01:00:57.088861 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.45s 2026-03-09 01:00:57.088867 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.25s 2026-03-09 01:00:57.088873 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.72s 2026-03-09 01:00:57.088880 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s 2026-03-09 01:00:57.088886 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-03-09 01:00:57.088892 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-09 01:00:57.088898 | orchestrator | 2026-03-09 01:00:57 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:00:57.088904 | orchestrator | 2026-03-09 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:00.153620 | orchestrator | 2026-03-09 01:01:00 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:00.157249 | orchestrator | 2026-03-09 01:01:00 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:00.157416 | orchestrator | 2026-03-09 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:03.204000 | orchestrator | 2026-03-09 01:01:03 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:03.206223 | orchestrator | 2026-03-09 01:01:03 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:03.206573 | orchestrator | 2026-03-09 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:06.256075 | orchestrator | 2026-03-09 01:01:06 | INFO  
| Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:06.260099 | orchestrator | 2026-03-09 01:01:06 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:06.260161 | orchestrator | 2026-03-09 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:09.308401 | orchestrator | 2026-03-09 01:01:09 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:09.311034 | orchestrator | 2026-03-09 01:01:09 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:09.311119 | orchestrator | 2026-03-09 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:12.358349 | orchestrator | 2026-03-09 01:01:12 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:12.360867 | orchestrator | 2026-03-09 01:01:12 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:12.360911 | orchestrator | 2026-03-09 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:15.404769 | orchestrator | 2026-03-09 01:01:15 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:15.405846 | orchestrator | 2026-03-09 01:01:15 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:15.406073 | orchestrator | 2026-03-09 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:18.454846 | orchestrator | 2026-03-09 01:01:18 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:18.457329 | orchestrator | 2026-03-09 01:01:18 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:18.457376 | orchestrator | 2026-03-09 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:21.504015 | orchestrator | 2026-03-09 01:01:21 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 
01:01:21.504254 | orchestrator | 2026-03-09 01:01:21 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:21.504543 | orchestrator | 2026-03-09 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:24.549031 | orchestrator | 2026-03-09 01:01:24 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:24.550827 | orchestrator | 2026-03-09 01:01:24 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:24.550902 | orchestrator | 2026-03-09 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:27.610237 | orchestrator | 2026-03-09 01:01:27 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:27.612113 | orchestrator | 2026-03-09 01:01:27 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:27.612149 | orchestrator | 2026-03-09 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:30.663331 | orchestrator | 2026-03-09 01:01:30 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:30.669325 | orchestrator | 2026-03-09 01:01:30 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state STARTED 2026-03-09 01:01:30.669434 | orchestrator | 2026-03-09 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:33.728054 | orchestrator | 2026-03-09 01:01:33 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED 2026-03-09 01:01:33.730898 | orchestrator | 2026-03-09 01:01:33 | INFO  | Task 71941d3f-dd93-41b3-bd16-fb769cedf89f is in state STARTED 2026-03-09 01:01:33.732577 | orchestrator | 2026-03-09 01:01:33 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:01:33.738135 | orchestrator | 2026-03-09 01:01:33 | INFO  | Task 41275e6d-abf4-4d76-9a80-7e4818b850b5 is in state SUCCESS 2026-03-09 01:01:33.738225 | orchestrator | 2026-03-09 
01:01:33.739756 | orchestrator | 2026-03-09 01:01:33.739798 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-09 01:01:33.739811 | orchestrator | 2026-03-09 01:01:33.739822 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-09 01:01:33.739834 | orchestrator | Monday 09 March 2026 00:57:59 +0000 (0:00:00.109) 0:00:00.109 ********** 2026-03-09 01:01:33.739846 | orchestrator | ok: [localhost] => { 2026-03-09 01:01:33.739858 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-09 01:01:33.739869 | orchestrator | } 2026-03-09 01:01:33.739880 | orchestrator | 2026-03-09 01:01:33.739891 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-09 01:01:33.739902 | orchestrator | Monday 09 March 2026 00:57:59 +0000 (0:00:00.064) 0:00:00.173 ********** 2026-03-09 01:01:33.739914 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-09 01:01:33.739926 | orchestrator | ...ignoring 2026-03-09 01:01:33.739938 | orchestrator | 2026-03-09 01:01:33.739949 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-09 01:01:33.739960 | orchestrator | Monday 09 March 2026 00:58:02 +0000 (0:00:02.998) 0:00:03.172 ********** 2026-03-09 01:01:33.739971 | orchestrator | skipping: [localhost] 2026-03-09 01:01:33.739982 | orchestrator | 2026-03-09 01:01:33.739992 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-09 01:01:33.740003 | orchestrator | Monday 09 March 2026 00:58:02 +0000 (0:00:00.060) 0:00:03.232 ********** 2026-03-09 01:01:33.740014 | orchestrator | ok: [localhost] 2026-03-09 01:01:33.740025 | orchestrator | 2026-03-09 01:01:33.740036 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:01:33.740047 | orchestrator | 2026-03-09 01:01:33.740058 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:01:33.740069 | orchestrator | Monday 09 March 2026 00:58:03 +0000 (0:00:00.187) 0:00:03.420 ********** 2026-03-09 01:01:33.740080 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:33.740699 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:33.740752 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:33.740765 | orchestrator | 2026-03-09 01:01:33.740777 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:01:33.740788 | orchestrator | Monday 09 March 2026 00:58:03 +0000 (0:00:00.344) 0:00:03.764 ********** 2026-03-09 01:01:33.740799 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-09 01:01:33.740810 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-09 01:01:33.740821 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-09 01:01:33.740832 | orchestrator | 2026-03-09 01:01:33.740842 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-09 01:01:33.740882 | orchestrator | 2026-03-09 01:01:33.740894 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-09 01:01:33.740904 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:00.675) 0:00:04.440 ********** 2026-03-09 01:01:33.740915 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 01:01:33.740927 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-09 01:01:33.740938 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-09 01:01:33.740949 | orchestrator | 2026-03-09 01:01:33.740960 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-09 01:01:33.740971 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:00.412) 0:00:04.852 ********** 2026-03-09 01:01:33.740983 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:33.741005 | orchestrator | 2026-03-09 01:01:33.741036 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-09 01:01:33.741055 | orchestrator | Monday 09 March 2026 00:58:05 +0000 (0:00:00.587) 0:00:05.440 ********** 2026-03-09 01:01:33.741171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.741205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.741252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.741282 | orchestrator | 2026-03-09 01:01:33.741362 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-09 01:01:33.741384 | orchestrator | Monday 09 March 2026 00:58:07 +0000 (0:00:02.859) 0:00:08.300 ********** 2026-03-09 01:01:33.741402 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.741415 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:33.741428 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.741441 | orchestrator | 2026-03-09 01:01:33.741453 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-09 01:01:33.741467 | orchestrator | Monday 09 March 2026 00:58:08 +0000 (0:00:00.792) 0:00:09.092 ********** 2026-03-09 01:01:33.741480 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
01:01:33.741494 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.741507 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:33.741520 | orchestrator | 2026-03-09 01:01:33.741534 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-09 01:01:33.741546 | orchestrator | Monday 09 March 2026 00:58:10 +0000 (0:00:01.693) 0:00:10.785 ********** 2026-03-09 01:01:33.741563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.741624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.741655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 
01:01:33.741685 | orchestrator | 2026-03-09 01:01:33.741703 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-09 01:01:33.741746 | orchestrator | Monday 09 March 2026 00:58:14 +0000 (0:00:03.650) 0:00:14.436 ********** 2026-03-09 01:01:33.741764 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.741780 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.741798 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:33.741817 | orchestrator | 2026-03-09 01:01:33.741836 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-09 01:01:33.741854 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:01.388) 0:00:15.824 ********** 2026-03-09 01:01:33.741945 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:33.741957 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:33.741968 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:33.741979 | orchestrator | 2026-03-09 01:01:33.741990 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-09 01:01:33.742001 | orchestrator | Monday 09 March 2026 00:58:21 +0000 (0:00:05.638) 0:00:21.463 ********** 2026-03-09 01:01:33.742071 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:33.742087 | orchestrator | 2026-03-09 01:01:33.742098 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-09 01:01:33.742109 | orchestrator | Monday 09 March 2026 00:58:21 +0000 (0:00:00.649) 0:00:22.113 ********** 2026-03-09 01:01:33.742158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742182 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.742195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742207 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.742232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742260 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.742289 | orchestrator | 2026-03-09 01:01:33.742308 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-09 01:01:33.742326 | orchestrator | Monday 09 March 2026 00:58:24 +0000 (0:00:02.978) 0:00:25.091 ********** 2026-03-09 01:01:33.742346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742367 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.742406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742432 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.742445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742457 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.742468 | orchestrator | 2026-03-09 01:01:33.742479 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-09 01:01:33.742490 | orchestrator | Monday 09 March 2026 00:58:28 +0000 (0:00:04.164) 0:00:29.256 ********** 2026-03-09 01:01:33.742506 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742525 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.742546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742559 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.742576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742589 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.742600 | orchestrator | 2026-03-09 01:01:33.742610 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-03-09 01:01:33.742627 | orchestrator | Monday 09 March 2026 00:58:33 +0000 
(0:00:04.257) 0:00:33.514 ********** 2026-03-09 01:01:33.742649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.742667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.742689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:01:33.742715 | orchestrator | 2026-03-09 01:01:33.742760 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-03-09 01:01:33.742772 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:04.321) 0:00:37.836 ********** 2026-03-09 01:01:33.742783 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:01:33.742794 | 
orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:01:33.742805 | orchestrator | } 2026-03-09 01:01:33.742817 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:01:33.742827 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:01:33.742838 | orchestrator | } 2026-03-09 01:01:33.742849 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:01:33.742860 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:01:33.742870 | orchestrator | } 2026-03-09 01:01:33.742881 | orchestrator | 2026-03-09 01:01:33.742892 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:01:33.742903 | orchestrator | Monday 09 March 2026 00:58:38 +0000 (0:00:00.574) 0:00:38.410 ********** 2026-03-09 01:01:33.742920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742940 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.742961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.742973 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.742990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.743008 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743019 | orchestrator | 2026-03-09 01:01:33.743030 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-03-09 01:01:33.743041 | orchestrator | Monday 09 March 2026 00:58:40 +0000 (0:00:02.810) 0:00:41.221 ********** 2026-03-09 01:01:33.743052 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743063 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743073 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743084 | orchestrator | 2026-03-09 01:01:33.743095 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-03-09 01:01:33.743106 | orchestrator | Monday 09 March 2026 00:58:41 +0000 (0:00:00.279) 0:00:41.500 ********** 2026-03-09 01:01:33.743117 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743128 | orchestrator | 2026-03-09 01:01:33.743138 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-03-09 01:01:33.743149 | orchestrator | Monday 09 March 2026 00:58:41 +0000 (0:00:00.151) 0:00:41.651 ********** 2026-03-09 01:01:33.743160 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743171 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743181 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743192 | orchestrator | 2026-03-09 01:01:33.743203 | orchestrator | TASK 
[mariadb : Run MariaDB wsrep recovery] ************************************ 2026-03-09 01:01:33.743214 | orchestrator | Monday 09 March 2026 00:58:41 +0000 (0:00:00.552) 0:00:42.203 ********** 2026-03-09 01:01:33.743231 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743246 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743265 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743282 | orchestrator | 2026-03-09 01:01:33.743300 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-03-09 01:01:33.743319 | orchestrator | Monday 09 March 2026 00:58:42 +0000 (0:00:00.382) 0:00:42.586 ********** 2026-03-09 01:01:33.743337 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743355 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743369 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743380 | orchestrator | 2026-03-09 01:01:33.743390 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-03-09 01:01:33.743401 | orchestrator | Monday 09 March 2026 00:58:42 +0000 (0:00:00.346) 0:00:42.933 ********** 2026-03-09 01:01:33.743412 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743423 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743434 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743444 | orchestrator | 2026-03-09 01:01:33.743455 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-03-09 01:01:33.743466 | orchestrator | Monday 09 March 2026 00:58:42 +0000 (0:00:00.340) 0:00:43.273 ********** 2026-03-09 01:01:33.743476 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743487 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743498 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743509 | orchestrator | 2026-03-09 01:01:33.743519 | orchestrator | TASK 
[mariadb : Registering MariaDB seqno variable] **************************** 2026-03-09 01:01:33.743530 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:00.430) 0:00:43.704 ********** 2026-03-09 01:01:33.743541 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743552 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743562 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743573 | orchestrator | 2026-03-09 01:01:33.743584 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-03-09 01:01:33.743595 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:00.290) 0:00:43.995 ********** 2026-03-09 01:01:33.743606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 01:01:33.743617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 01:01:33.743635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 01:01:33.743646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-09 01:01:33.743656 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-09 01:01:33.743667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-09 01:01:33.743678 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743689 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-09 01:01:33.743710 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-09 01:01:33.743744 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-09 01:01:33.743756 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743770 | orchestrator | 2026-03-09 01:01:33.743788 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-09 01:01:33.743809 | orchestrator | Monday 
09 March 2026 00:58:43 +0000 (0:00:00.336) 0:00:44.331 ********** 2026-03-09 01:01:33.743836 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743854 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743872 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743889 | orchestrator | 2026-03-09 01:01:33.743907 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-09 01:01:33.743926 | orchestrator | Monday 09 March 2026 00:58:44 +0000 (0:00:00.323) 0:00:44.655 ********** 2026-03-09 01:01:33.743948 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.743965 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.743981 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.743992 | orchestrator | 2026-03-09 01:01:33.744003 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-09 01:01:33.744014 | orchestrator | Monday 09 March 2026 00:58:44 +0000 (0:00:00.551) 0:00:45.207 ********** 2026-03-09 01:01:33.744025 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744035 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744048 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744067 | orchestrator | 2026-03-09 01:01:33.744084 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-09 01:01:33.744102 | orchestrator | Monday 09 March 2026 00:58:45 +0000 (0:00:00.357) 0:00:45.564 ********** 2026-03-09 01:01:33.744119 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744137 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744156 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744175 | orchestrator | 2026-03-09 01:01:33.744191 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-09 01:01:33.744203 | orchestrator | Monday 
09 March 2026 00:58:45 +0000 (0:00:00.350) 0:00:45.915 ********** 2026-03-09 01:01:33.744214 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744224 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744235 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744252 | orchestrator | 2026-03-09 01:01:33.744278 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-09 01:01:33.744300 | orchestrator | Monday 09 March 2026 00:58:45 +0000 (0:00:00.351) 0:00:46.266 ********** 2026-03-09 01:01:33.744318 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744334 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744352 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744371 | orchestrator | 2026-03-09 01:01:33.744389 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-09 01:01:33.744447 | orchestrator | Monday 09 March 2026 00:58:46 +0000 (0:00:00.523) 0:00:46.790 ********** 2026-03-09 01:01:33.744461 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744472 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744483 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744494 | orchestrator | 2026-03-09 01:01:33.744506 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-09 01:01:33.744538 | orchestrator | Monday 09 March 2026 00:58:46 +0000 (0:00:00.329) 0:00:47.119 ********** 2026-03-09 01:01:33.744549 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744560 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744571 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744582 | orchestrator | 2026-03-09 01:01:33.744593 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-09 01:01:33.744604 | orchestrator | Monday 09 
March 2026 00:58:47 +0000 (0:00:00.336) 0:00:47.455 ********** 2026-03-09 01:01:33.744617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.744630 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
01:01:33.744647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.744666 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.744698 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744709 | orchestrator | 2026-03-09 01:01:33.744758 | orchestrator | TASK [mariadb : Wait for slave MariaDB] 
**************************************** 2026-03-09 01:01:33.744771 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:02.520) 0:00:49.975 ********** 2026-03-09 01:01:33.744782 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744793 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744804 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744815 | orchestrator | 2026-03-09 01:01:33.744826 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-09 01:01:33.744836 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:00.282) 0:00:50.258 ********** 2026-03-09 01:01:33.744854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.744874 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.744895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.744907 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 01:01:33.744942 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.744953 | orchestrator | 2026-03-09 01:01:33.744964 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-09 01:01:33.744975 | orchestrator | Monday 09 March 2026 00:58:52 +0000 (0:00:02.593) 0:00:52.852 ********** 2026-03-09 01:01:33.744986 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.744996 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.745007 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.745018 | orchestrator | 2026-03-09 01:01:33.745029 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-09 01:01:33.745045 | orchestrator | Monday 09 March 2026 00:58:52 +0000 (0:00:00.306) 0:00:53.159 ********** 2026-03-09 01:01:33.745056 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.745067 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.745078 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.745088 | orchestrator | 2026-03-09 01:01:33.745099 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-09 01:01:33.745110 | orchestrator | Monday 09 March 2026 00:58:53 +0000 (0:00:00.290) 0:00:53.449 ********** 2026-03-09 01:01:33.745121 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:33.745132 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:33.745142 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:33.745153 | orchestrator | 2026-03-09 
01:01:33.745168 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-03-09 01:01:33.745191 | orchestrator | Monday 09 March 2026 00:58:53 +0000 (0:00:00.305) 0:00:53.755 **********
2026-03-09 01:01:33.745218 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.745236 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.745255 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.745293 | orchestrator |
2026-03-09 01:01:33.745312 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-09 01:01:33.745330 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:00.646) 0:00:54.401 **********
2026-03-09 01:01:33.745348 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.745367 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.745386 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.745404 | orchestrator |
2026-03-09 01:01:33.745422 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-09 01:01:33.745442 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:00.315) 0:00:54.717 **********
2026-03-09 01:01:33.745454 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.745465 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:33.745476 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:33.745487 | orchestrator |
2026-03-09 01:01:33.745498 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-09 01:01:33.745509 | orchestrator | Monday 09 March 2026 00:58:55 +0000 (0:00:00.962) 0:00:55.679 **********
2026-03-09 01:01:33.745519 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.745530 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.745541 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.745552 | orchestrator |
2026-03-09 01:01:33.745563 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-09 01:01:33.745574 | orchestrator | Monday 09 March 2026 00:58:55 +0000 (0:00:00.458) 0:00:56.138 **********
2026-03-09 01:01:33.745584 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.745595 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.745606 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.745628 | orchestrator |
2026-03-09 01:01:33.745639 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-09 01:01:33.745650 | orchestrator | Monday 09 March 2026 00:58:56 +0000 (0:00:00.292) 0:00:56.430 **********
2026-03-09 01:01:33.745662 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-09 01:01:33.745674 | orchestrator | ...ignoring
2026-03-09 01:01:33.745686 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-09 01:01:33.745697 | orchestrator | ...ignoring
2026-03-09 01:01:33.745708 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-09 01:01:33.745775 | orchestrator | ...ignoring
2026-03-09 01:01:33.745788 | orchestrator |
2026-03-09 01:01:33.745800 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-09 01:01:33.745811 | orchestrator | Monday 09 March 2026 00:59:06 +0000 (0:00:10.737) 0:01:07.167 **********
2026-03-09 01:01:33.745821 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.745832 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.745843 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.745854 | orchestrator |
2026-03-09 01:01:33.745865 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-09 01:01:33.745876 | orchestrator | Monday 09 March 2026 00:59:07 +0000 (0:00:00.380) 0:01:07.548 **********
2026-03-09 01:01:33.745887 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.745897 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.745908 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.745919 | orchestrator |
2026-03-09 01:01:33.745939 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-09 01:01:33.745949 | orchestrator | Monday 09 March 2026 00:59:07 +0000 (0:00:00.566) 0:01:08.114 **********
2026-03-09 01:01:33.745959 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.745968 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.745978 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.745987 | orchestrator |
2026-03-09 01:01:33.745997 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-09 01:01:33.746006 | orchestrator | Monday 09 March 2026 00:59:08 +0000 (0:00:00.355) 0:01:08.470 **********
2026-03-09 01:01:33.746064 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.746076 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.746086 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.746096 | orchestrator |
2026-03-09 01:01:33.746105 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-09 01:01:33.746115 | orchestrator | Monday 09 March 2026 00:59:08 +0000 (0:00:00.363) 0:01:08.833 **********
2026-03-09 01:01:33.746125 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.746135 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.746144 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.746154 | orchestrator |
2026-03-09 01:01:33.746164 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-09 01:01:33.746173 | orchestrator | Monday 09 March 2026 00:59:08 +0000 (0:00:00.360) 0:01:09.193 **********
2026-03-09 01:01:33.746183 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.746202 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.746211 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.746221 | orchestrator |
2026-03-09 01:01:33.746231 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 01:01:33.746240 | orchestrator | Monday 09 March 2026 00:59:09 +0000 (0:00:00.586) 0:01:09.780 **********
2026-03-09 01:01:33.746250 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.746260 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.746269 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-09 01:01:33.746286 | orchestrator |
2026-03-09 01:01:33.746296 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-09 01:01:33.746305 | orchestrator | Monday 09 March 2026 00:59:09 +0000 (0:00:00.405) 0:01:10.186 **********
2026-03-09 01:01:33.746315 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.746325 | orchestrator |
2026-03-09 01:01:33.746334 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-09 01:01:33.746344 | orchestrator | Monday 09 March 2026 00:59:20 +0000 (0:00:10.999) 0:01:21.185 **********
2026-03-09 01:01:33.746353 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.746363 | orchestrator |
2026-03-09 01:01:33.746372 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 01:01:33.746382 | orchestrator | Monday 09 March 2026 00:59:20 +0000 (0:00:00.140) 0:01:21.326 **********
2026-03-09 01:01:33.746391 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.746401 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.746410 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.746420 | orchestrator |
2026-03-09 01:01:33.746430 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-09 01:01:33.746439 | orchestrator | Monday 09 March 2026 00:59:21 +0000 (0:00:00.973) 0:01:22.300 **********
2026-03-09 01:01:33.746449 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.746458 | orchestrator |
2026-03-09 01:01:33.746468 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-09 01:01:33.746478 | orchestrator | Monday 09 March 2026 00:59:30 +0000 (0:00:08.768) 0:01:31.069 **********
2026-03-09 01:01:33.746487 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.746497 | orchestrator |
2026-03-09 01:01:33.746507 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-09 01:01:33.746516 | orchestrator | Monday 09 March 2026 00:59:32 +0000 (0:00:01.659) 0:01:32.728 **********
2026-03-09 01:01:33.746526 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.746535 | orchestrator |
2026-03-09 01:01:33.746545 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-09 01:01:33.746555 | orchestrator | Monday 09 March 2026 00:59:34 +0000 (0:00:02.520) 0:01:35.248 **********
2026-03-09 01:01:33.746564 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.746574 | orchestrator |
2026-03-09 01:01:33.746583 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-09 01:01:33.746593 | orchestrator | Monday 09 March 2026 00:59:35 +0000 (0:00:00.140) 0:01:35.388 **********
2026-03-09 01:01:33.746602 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.746612 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.746622 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.746631 | orchestrator |
2026-03-09 01:01:33.746641 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-09 01:01:33.746651 | orchestrator | Monday 09 March 2026 00:59:35 +0000 (0:00:00.344) 0:01:35.733 **********
2026-03-09 01:01:33.746660 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.746670 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:33.746679 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:33.746689 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-09 01:01:33.746699 | orchestrator |
2026-03-09 01:01:33.746708 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-09 01:01:33.746732 | orchestrator | skipping: no hosts matched
2026-03-09 01:01:33.746743 | orchestrator |
2026-03-09 01:01:33.746752 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-09 01:01:33.746762 | orchestrator |
2026-03-09 01:01:33.746772 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 01:01:33.746781 | orchestrator | Monday 09 March 2026 00:59:36 +0000 (0:00:00.640) 0:01:36.373 **********
2026-03-09 01:01:33.746791 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:33.746800 | orchestrator |
2026-03-09 01:01:33.746816 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 01:01:33.746826 | orchestrator | Monday 09 March 2026 00:59:58 +0000 (0:00:22.789) 0:01:59.163 **********
2026-03-09 01:01:33.746841 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.746851 | orchestrator |
2026-03-09 01:01:33.746860 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 01:01:33.746870 | orchestrator | Monday 09 March 2026 01:00:09 +0000 (0:00:10.599) 0:02:09.762 **********
2026-03-09 01:01:33.746880 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.746889 | orchestrator |
2026-03-09 01:01:33.746899 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-09 01:01:33.746908 | orchestrator |
2026-03-09 01:01:33.746918 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 01:01:33.746928 | orchestrator | Monday 09 March 2026 01:00:11 +0000 (0:00:02.234) 0:02:11.996 **********
2026-03-09 01:01:33.746937 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:33.746947 | orchestrator |
2026-03-09 01:01:33.746956 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 01:01:33.746966 | orchestrator | Monday 09 March 2026 01:00:31 +0000 (0:00:19.423) 0:02:31.421 **********
2026-03-09 01:01:33.746975 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.746985 | orchestrator |
2026-03-09 01:01:33.746995 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 01:01:33.747004 | orchestrator | Monday 09 March 2026 01:00:46 +0000 (0:00:15.855) 0:02:47.276 **********
2026-03-09 01:01:33.747014 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.747023 | orchestrator |
2026-03-09 01:01:33.747033 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-09 01:01:33.747042 | orchestrator |
2026-03-09 01:01:33.747057 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 01:01:33.747067 | orchestrator | Monday 09 March 2026 01:00:49 +0000 (0:00:02.562) 0:02:49.839 **********
2026-03-09 01:01:33.747077 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.747087 | orchestrator |
2026-03-09 01:01:33.747096 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 01:01:33.747106 | orchestrator | Monday 09 March 2026 01:01:02 +0000 (0:00:12.788) 0:03:02.627 **********
2026-03-09 01:01:33.747115 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.747125 | orchestrator |
2026-03-09 01:01:33.747134 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 01:01:33.747144 | orchestrator | Monday 09 March 2026 01:01:06 +0000 (0:00:04.651) 0:03:07.279 **********
2026-03-09 01:01:33.747154 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.747163 | orchestrator |
2026-03-09 01:01:33.747173 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-09 01:01:33.747182 | orchestrator |
2026-03-09 01:01:33.747192 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-09 01:01:33.747201 | orchestrator | Monday 09 March 2026 01:01:09 +0000 (0:00:02.562) 0:03:09.841 **********
2026-03-09 01:01:33.747211 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:33.747220 | orchestrator |
2026-03-09 01:01:33.747230 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-09 01:01:33.747240 | orchestrator | Monday 09 March 2026 01:01:10 +0000 (0:00:00.595) 0:03:10.436 **********
2026-03-09 01:01:33.747249 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747259 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747268 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.747278 | orchestrator |
2026-03-09 01:01:33.747288 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-09 01:01:33.747297 | orchestrator | Monday 09 March 2026 01:01:12 +0000 (0:00:02.367) 0:03:12.804 **********
2026-03-09 01:01:33.747307 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747317 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747332 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.747341 | orchestrator |
2026-03-09 01:01:33.747351 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-09 01:01:33.747361 | orchestrator | Monday 09 March 2026 01:01:14 +0000 (0:00:02.552) 0:03:15.356 **********
2026-03-09 01:01:33.747370 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747380 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747390 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.747399 | orchestrator |
2026-03-09 01:01:33.747409 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-09 01:01:33.747418 | orchestrator | Monday 09 March 2026 01:01:17 +0000 (0:00:02.414) 0:03:17.770 **********
2026-03-09 01:01:33.747428 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747438 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747447 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:33.747457 | orchestrator |
2026-03-09 01:01:33.747467 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-03-09 01:01:33.747476 | orchestrator | Monday 09 March 2026 01:01:19 +0000 (0:00:02.378) 0:03:20.149 **********
2026-03-09 01:01:33.747486 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.747496 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.747511 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.747528 | orchestrator |
2026-03-09 01:01:33.747545 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-03-09 01:01:33.747563 | orchestrator | Monday 09 March 2026 01:01:24 +0000 (0:00:04.848) 0:03:24.997 **********
2026-03-09 01:01:33.747579 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.747596 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747612 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747627 | orchestrator |
2026-03-09 01:01:33.747644 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-03-09 01:01:33.747659 | orchestrator | Monday 09 March 2026 01:01:27 +0000 (0:00:02.628) 0:03:27.625 **********
2026-03-09 01:01:33.747675 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.747689 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747705 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747797 | orchestrator |
2026-03-09 01:01:33.747815 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-09 01:01:33.747830 | orchestrator | Monday 09 March 2026 01:01:27 +0000 (0:00:00.581) 0:03:28.207 **********
2026-03-09 01:01:33.747845 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:33.747860 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:33.747875 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:33.747892 | orchestrator |
2026-03-09 01:01:33.747915 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-09 01:01:33.747930 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:02.704) 0:03:30.911 **********
2026-03-09 01:01:33.747944 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:33.747958 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:33.747972 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:33.747986 | orchestrator |
2026-03-09 01:01:33.748000 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:01:33.748014 | orchestrator | localhost      : ok=3   changed=0  unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-09 01:01:33.748029 | orchestrator | testbed-node-0 : ok=36  changed=17 unreachable=0 failed=0 skipped=39 rescued=0 ignored=1
2026-03-09 01:01:33.748046 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45 rescued=0 ignored=1
2026-03-09 01:01:33.748060 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45 rescued=0 ignored=1
2026-03-09 01:01:33.748085 | orchestrator |
2026-03-09 01:01:33.748100 | orchestrator |
2026-03-09 01:01:33.748124 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:01:33.748139 | orchestrator | Monday 09 March 2026 01:01:31 +0000 (0:00:00.500) 0:03:31.411 **********
2026-03-09 01:01:33.748153 | orchestrator | ===============================================================================
2026-03-09 01:01:33.748167 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.21s
2026-03-09 01:01:33.748181 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.45s
2026-03-09 01:01:33.748195 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.79s
2026-03-09 01:01:33.748209 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.00s
2026-03-09 01:01:33.748223 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.74s
2026-03-09 01:01:33.748238 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.77s
2026-03-09 01:01:33.748252 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.64s
2026-03-09 01:01:33.748266 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.85s
2026-03-09 01:01:33.748280 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.80s
2026-03-09 01:01:33.748295 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.65s
2026-03-09 01:01:33.748309 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.32s
2026-03-09 01:01:33.748323 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.26s
2026-03-09 01:01:33.748337 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.16s
2026-03-09 01:01:33.748351 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.65s
2026-03-09 01:01:33.748366 | orchestrator | Check MariaDB service --------------------------------------------------- 3.00s
2026-03-09 01:01:33.748380 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.98s
2026-03-09 01:01:33.748394 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.86s
2026-03-09 01:01:33.748408 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.81s
2026-03-09 01:01:33.748422 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.70s
2026-03-09 01:01:33.748437 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.63s
2026-03-09 01:01:33.748451 | orchestrator | 2026-03-09 01:01:33 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:01:36.796110 | orchestrator | 2026-03-09 01:01:36 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state STARTED
2026-03-09 01:01:36.797566 | orchestrator | 2026-03-09 01:01:36 | INFO  | Task 71941d3f-dd93-41b3-bd16-fb769cedf89f is in state STARTED
2026-03-09 01:01:36.799088 | orchestrator | 2026-03-09 01:01:36 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED
2026-03-09 01:01:36.799339 | orchestrator | 2026-03-09 01:01:36 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:10.392486 | orchestrator | 2026-03-09 01:02:10 | INFO  | Task d74c697a-bf50-40a5-8385-42fa432bf584 is in state SUCCESS
2026-03-09 01:02:10.394264 | orchestrator |
2026-03-09 01:02:10.394327 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 01:02:10.394342 | orchestrator | 2.16.14
2026-03-09 01:02:10.394356 | orchestrator |
2026-03-09 01:02:10.394381 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-09 01:02:10.394393 | orchestrator |
2026-03-09 01:02:10.394409 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-09 01:02:10.394428 | orchestrator | Monday 09 March 2026 00:59:53 +0000 (0:00:00.579) 0:00:00.579 **********
2026-03-09 01:02:10.394452 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:02:10.394480 | orchestrator |
2026-03-09 01:02:10.394497 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-09 01:02:10.394515 | orchestrator | Monday 09 March 2026 00:59:53 +0000 (0:00:00.565) 0:00:01.145 **********
2026-03-09 01:02:10.394534 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.394552 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.394570 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.395547 | orchestrator |
2026-03-09 01:02:10.395565 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-09 01:02:10.395577 | orchestrator | Monday 09 March 2026 00:59:54 +0000 (0:00:00.588) 0:00:01.734 **********
2026-03-09 01:02:10.395589 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.395600 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.395611 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.395622 | orchestrator |
2026-03-09 01:02:10.395633 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-09 01:02:10.395644 | orchestrator | Monday 09 March 2026 00:59:54 +0000 (0:00:00.335) 0:00:02.070 **********
2026-03-09 01:02:10.395656 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.395667 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.395678 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.395689 | orchestrator |
2026-03-09 01:02:10.395700 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-09 01:02:10.395711 | orchestrator | Monday 09 March 2026 00:59:55 +0000 (0:00:00.866) 0:00:02.937 **********
2026-03-09 01:02:10.395783 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.395947 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.395963 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.395982 | orchestrator |
2026-03-09 01:02:10.396000 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-09 01:02:10.396018 | orchestrator | Monday 09 March 2026 00:59:55 +0000 (0:00:00.350) 0:00:03.288 **********
2026-03-09 01:02:10.396036 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.396053 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.396071 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.396088 | orchestrator |
2026-03-09 01:02:10.396106 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-09 01:02:10.396125 | orchestrator | Monday 09 March 2026 00:59:56 +0000 (0:00:00.317) 0:00:03.605 **********
2026-03-09 01:02:10.396176 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.396198 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.396219 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.396237 | orchestrator |
2026-03-09 01:02:10.396249 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-09 01:02:10.396261 | orchestrator | Monday 09 March 2026 00:59:56 +0000 (0:00:00.333) 0:00:03.938 **********
2026-03-09 01:02:10.396272 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:02:10.396284 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:02:10.396294 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:02:10.396305 | orchestrator |
2026-03-09 01:02:10.396316 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-09 01:02:10.396330 | orchestrator | Monday 09 March 2026 00:59:57 +0000 (0:00:00.511) 0:00:04.449 **********
2026-03-09 01:02:10.396353 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.396380 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.396398 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.396416 | orchestrator |
2026-03-09 01:02:10.396434 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-09 01:02:10.396451 | orchestrator | Monday 09 March 2026 00:59:57 +0000 (0:00:00.297) 0:00:04.747 **********
2026-03-09 01:02:10.396469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 01:02:10.396488 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 01:02:10.396508 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 01:02:10.396526 | orchestrator |
2026-03-09 01:02:10.396544 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-09 01:02:10.396563 | orchestrator | Monday 09 March 2026 00:59:58 +0000 (0:00:00.703) 0:00:05.450 **********
2026-03-09 01:02:10.396582 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:02:10.396600 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:02:10.396618 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:02:10.396639 | orchestrator |
2026-03-09 01:02:10.396659 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-09 01:02:10.396680 | orchestrator | Monday 09 March 2026 00:59:58 +0000 (0:00:00.473) 0:00:05.924 **********
2026-03-09 01:02:10.396698 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 01:02:10.396717 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 01:02:10.396736 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 01:02:10.396754 | orchestrator |
2026-03-09 01:02:10.396773 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-09 01:02:10.396792 | orchestrator | Monday 09 March 2026 01:00:00 +0000 (0:00:02.304) 0:00:08.228 **********
2026-03-09 01:02:10.396957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 01:02:10.396969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 01:02:10.396980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 01:02:10.397084 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:02:10.397102 | orchestrator |
2026-03-09 01:02:10.397190 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-09 01:02:10.397217 | orchestrator | Monday 09 March 2026 01:00:01 +0000 (0:00:00.684) 0:00:08.913 **********
2026-03-09 01:02:10.397230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397280 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:02:10.397291 | orchestrator |
2026-03-09 01:02:10.397302 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-09 01:02:10.397312 | orchestrator | Monday 09 March 2026 01:00:02 +0000 (0:00:00.922) 0:00:09.836 **********
2026-03-09 01:02:10.397325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397363 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:02:10.397374 | orchestrator |
2026-03-09 01:02:10.397385 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-09 01:02:10.397396 | orchestrator | Monday 09 March 2026 01:00:02 +0000 (0:00:00.434) 0:00:10.271 **********
2026-03-09 01:02:10.397407 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cdf0e5b7020c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-09 00:59:59.248652', 'end': '2026-03-09 00:59:59.292570', 'delta': '0:00:00.043918', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cdf0e5b7020c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 01:02:10.397420 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b2c1e0ba4240', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-09 01:00:00.017878', 'end': '2026-03-09 01:00:00.053546', 'delta': '0:00:00.035668', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b2c1e0ba4240'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-09 01:02:10.397464 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '16b0dc52e825', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-09 01:00:00.668093', 'end': '2026-03-09 01:00:00.711093', 'delta': '0:00:00.043000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['16b0dc52e825'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-09 01:02:10.397483 | orchestrator | 2026-03-09 01:02:10.397493 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-09 01:02:10.397503 | orchestrator | Monday 09 March 2026 01:00:03 +0000 (0:00:00.221) 0:00:10.492 ********** 2026-03-09 01:02:10.397513 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.397523 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:02:10.397533 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:02:10.397542 | orchestrator | 2026-03-09 01:02:10.397563 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-09 01:02:10.397581 | orchestrator | Monday 09 March 2026 01:00:03 +0000 (0:00:00.496) 0:00:10.989 ********** 2026-03-09 01:02:10.397591 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-09 01:02:10.397601 | orchestrator | 2026-03-09 01:02:10.397611 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-09 01:02:10.397620 | orchestrator | Monday 09 March 2026 01:00:05 +0000 (0:00:01.862) 0:00:12.852 ********** 2026-03-09 
01:02:10.397630 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.397640 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.397649 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.397659 | orchestrator | 2026-03-09 01:02:10.397668 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-09 01:02:10.397678 | orchestrator | Monday 09 March 2026 01:00:05 +0000 (0:00:00.485) 0:00:13.337 ********** 2026-03-09 01:02:10.397687 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.397697 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.397707 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.397716 | orchestrator | 2026-03-09 01:02:10.397726 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-09 01:02:10.397736 | orchestrator | Monday 09 March 2026 01:00:06 +0000 (0:00:00.533) 0:00:13.871 ********** 2026-03-09 01:02:10.397745 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.397755 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.397764 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.397774 | orchestrator | 2026-03-09 01:02:10.397784 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-09 01:02:10.397845 | orchestrator | Monday 09 March 2026 01:00:07 +0000 (0:00:00.557) 0:00:14.429 ********** 2026-03-09 01:02:10.397864 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.397881 | orchestrator | 2026-03-09 01:02:10.397898 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-09 01:02:10.397909 | orchestrator | Monday 09 March 2026 01:00:07 +0000 (0:00:00.137) 0:00:14.566 ********** 2026-03-09 01:02:10.397918 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.397928 | orchestrator | 2026-03-09 01:02:10.397938 | orchestrator | 
TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-09 01:02:10.397948 | orchestrator | Monday 09 March 2026 01:00:07 +0000 (0:00:00.219) 0:00:14.786 ********** 2026-03-09 01:02:10.397957 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.397967 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.397976 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.397986 | orchestrator | 2026-03-09 01:02:10.397996 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-09 01:02:10.398005 | orchestrator | Monday 09 March 2026 01:00:07 +0000 (0:00:00.297) 0:00:15.083 ********** 2026-03-09 01:02:10.398057 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398069 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.398086 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.398096 | orchestrator | 2026-03-09 01:02:10.398106 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-09 01:02:10.398116 | orchestrator | Monday 09 March 2026 01:00:08 +0000 (0:00:00.332) 0:00:15.415 ********** 2026-03-09 01:02:10.398125 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398135 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.398145 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.398154 | orchestrator | 2026-03-09 01:02:10.398164 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-09 01:02:10.398173 | orchestrator | Monday 09 March 2026 01:00:08 +0000 (0:00:00.561) 0:00:15.977 ********** 2026-03-09 01:02:10.398183 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398192 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.398202 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.398211 | orchestrator | 2026-03-09 01:02:10.398221 | orchestrator | 
TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-09 01:02:10.398231 | orchestrator | Monday 09 March 2026 01:00:08 +0000 (0:00:00.332) 0:00:16.310 ********** 2026-03-09 01:02:10.398240 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398250 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.398259 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.398269 | orchestrator | 2026-03-09 01:02:10.398278 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-09 01:02:10.398288 | orchestrator | Monday 09 March 2026 01:00:09 +0000 (0:00:00.349) 0:00:16.659 ********** 2026-03-09 01:02:10.398298 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398307 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.398317 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.398362 | orchestrator | 2026-03-09 01:02:10.398374 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-09 01:02:10.398389 | orchestrator | Monday 09 March 2026 01:00:09 +0000 (0:00:00.324) 0:00:16.983 ********** 2026-03-09 01:02:10.398399 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398409 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.398420 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.398430 | orchestrator | 2026-03-09 01:02:10.398441 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-09 01:02:10.398452 | orchestrator | Monday 09 March 2026 01:00:10 +0000 (0:00:00.565) 0:00:17.549 ********** 2026-03-09 01:02:10.398464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0', 
'dm-uuid-LVM-xoYiAr1LbGAgQx9YTSY4h87WEEAMBYG6KvCGKgRKiE7cyM04uk8bDW8y2n0svaKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573', 'dm-uuid-LVM-gl3VxdhyGcL39CYSAZ2UylTo0uqBhzMRbQXrveI7l53qqf8ztRDRHEHmQd5yahj6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398525 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.398667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7ZLXT4-E7kf-zLjW-diLI-wHLN-Z5Od-qwtJ62', 'scsi-0QEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284', 'scsi-SQEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.398715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22UVB5-Gz8Y-u89a-DzGO-vLep-gcHN-21CHr2', 'scsi-0QEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393', 'scsi-SQEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.398729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f', 'scsi-SQEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.398742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.398760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17', 'dm-uuid-LVM-r6O3uel0WqqZv6vhGYFFKRbvfWkcwOjX1gmhQS9oeLec7ivOjKlRCcgI2KpJCYRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f', 'dm-uuid-LVM-GDcxOYRYMTfbdE6bm9RUedT2ja1WXcothVu0Q3hYuGWfxKTaMQ5s9URketbQftD2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398937 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.398949 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.398984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.398998 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6Q0C3-FUqs-T6yd-w7Jq-twLV-onDI-LnXz1U', 'scsi-0QEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9', 'scsi-SQEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5GWMwc-VjMm-BxBU-2FIP-P70X-LgzN-b8AaYw', 'scsi-0QEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3', 'scsi-SQEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c', 'scsi-SQEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553', 'dm-uuid-LVM-ztfRVe47Oaz8Dx4feBZw1IAdMSfcHeyflLsgo48Fz0kcNSIrp8VYsCm7tSHUqDEd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399062 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.399089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47', 'dm-uuid-LVM-lgVd3TGKAanyx1UuubDE8F4fOcWVj8DjuQV0cGgI4D2C5F0zBzfD0ig57Sb9wsbD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:02:10.399212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u1mKP3-MJVB-fCwd-HeH7-ziOJ-ldBN-jXUfdI', 'scsi-0QEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba', 'scsi-SQEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbVuqY-9dCU-ISmZ-mZSm-7ebn-T3LB-YnmwYS', 'scsi-0QEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec', 'scsi-SQEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560', 'scsi-SQEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:02:10.399290 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.399301 | orchestrator | 2026-03-09 01:02:10.399312 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-09 01:02:10.399323 | orchestrator | Monday 09 March 2026 01:00:10 +0000 (0:00:00.678) 0:00:18.227 ********** 2026-03-09 01:02:10.399340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0', 'dm-uuid-LVM-xoYiAr1LbGAgQx9YTSY4h87WEEAMBYG6KvCGKgRKiE7cyM04uk8bDW8y2n0svaKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399353 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573', 'dm-uuid-LVM-gl3VxdhyGcL39CYSAZ2UylTo0uqBhzMRbQXrveI7l53qqf8ztRDRHEHmQd5yahj6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399431 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17', 'dm-uuid-LVM-r6O3uel0WqqZv6vhGYFFKRbvfWkcwOjX1gmhQS9oeLec7ivOjKlRCcgI2KpJCYRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399499 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f', 'dm-uuid-LVM-GDcxOYRYMTfbdE6bm9RUedT2ja1WXcothVu0Q3hYuGWfxKTaMQ5s9URketbQftD2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c47531ab-b779-461a-8b30-0be29ea5188d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0b4a24c5--7164--5e55--92cc--433a48be10d0-osd--block--0b4a24c5--7164--5e55--92cc--433a48be10d0'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7ZLXT4-E7kf-zLjW-diLI-wHLN-Z5Od-qwtJ62', 'scsi-0QEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284', 'scsi-SQEMU_QEMU_HARDDISK_a6833780-5d8c-49cb-baf4-596d7658d284'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399571 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--07cae8b8--d309--58e5--9f3f--3806cd3fe573-osd--block--07cae8b8--d309--58e5--9f3f--3806cd3fe573'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22UVB5-Gz8Y-u89a-DzGO-vLep-gcHN-21CHr2', 'scsi-0QEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393', 'scsi-SQEMU_QEMU_HARDDISK_d782a267-8601-4e70-9eb9-845bf96c3393'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f', 'scsi-SQEMU_QEMU_HARDDISK_e401ede7-34f1-42e1-9654-8299af9dca9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399629 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.399650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399680 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399728 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4e4dbd9-9c57-4314-9e8e-bca4232cec07-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399748 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9c74837a--43e3--5ea9--9fe0--5cec11260b17-osd--block--9c74837a--43e3--5ea9--9fe0--5cec11260b17'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6Q0C3-FUqs-T6yd-w7Jq-twLV-onDI-LnXz1U', 'scsi-0QEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9', 'scsi-SQEMU_QEMU_HARDDISK_bc061b31-9341-4fe1-bc4e-7c107d37f2f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399760 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553', 'dm-uuid-LVM-ztfRVe47Oaz8Dx4feBZw1IAdMSfcHeyflLsgo48Fz0kcNSIrp8VYsCm7tSHUqDEd'], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--590958f1--5006--5da8--896c--bdb08f0ac33f-osd--block--590958f1--5006--5da8--896c--bdb08f0ac33f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5GWMwc-VjMm-BxBU-2FIP-P70X-LgzN-b8AaYw', 'scsi-0QEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3', 'scsi-SQEMU_QEMU_HARDDISK_32378689-09a5-476b-b0b0-ef0e7774d8c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47', 'dm-uuid-LVM-lgVd3TGKAanyx1UuubDE8F4fOcWVj8DjuQV0cGgI4D2C5F0zBzfD0ig57Sb9wsbD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c', 'scsi-SQEMU_QEMU_HARDDISK_96371732-37bf-4fbc-835d-bb1aff74906c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399861 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399884 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.399895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.399993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16', 'scsi-SQEMU_QEMU_HARDDISK_e15e63a5-d93c-4538-92f1-da1d17102847-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-09 01:02:10.400012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e95d8336--562c--5e60--938c--e1db43f5f553-osd--block--e95d8336--562c--5e60--938c--e1db43f5f553'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u1mKP3-MJVB-fCwd-HeH7-ziOJ-ldBN-jXUfdI', 'scsi-0QEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba', 'scsi-SQEMU_QEMU_HARDDISK_af02e055-7e15-40a4-be69-d990d822f0ba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.400024 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c56389c1--f3b1--5ba6--b160--f425a16b3e47-osd--block--c56389c1--f3b1--5ba6--b160--f425a16b3e47'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbVuqY-9dCU-ISmZ-mZSm-7ebn-T3LB-YnmwYS', 'scsi-0QEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec', 'scsi-SQEMU_QEMU_HARDDISK_9db61a68-6a19-4ffe-9dc6-6109c8ad90ec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.400035 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560', 'scsi-SQEMU_QEMU_HARDDISK_5e6a3ca4-1946-4dac-9dc1-38bfb1214560'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.400065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:02:10.400078 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.400089 | orchestrator | 2026-03-09 01:02:10.400100 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-09 01:02:10.400111 | orchestrator | Monday 09 March 2026 01:00:11 +0000 (0:00:00.672) 0:00:18.899 ********** 2026-03-09 01:02:10.400122 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.400133 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:02:10.400143 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:02:10.400154 | orchestrator | 2026-03-09 01:02:10.400165 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-09 01:02:10.400176 | orchestrator | Monday 09 March 2026 01:00:12 +0000 (0:00:00.746) 0:00:19.645 ********** 2026-03-09 01:02:10.400186 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.400197 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:02:10.400208 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:02:10.400218 | orchestrator | 2026-03-09 01:02:10.400229 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 01:02:10.400240 | orchestrator | Monday 09 March 2026 01:00:12 +0000 (0:00:00.610) 0:00:20.256 ********** 2026-03-09 01:02:10.400251 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.400262 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:02:10.400273 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:02:10.400283 | orchestrator | 2026-03-09 01:02:10.400294 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 01:02:10.400305 | orchestrator | Monday 09 March 2026 01:00:13 +0000 (0:00:00.732) 0:00:20.989 
********** 2026-03-09 01:02:10.400316 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.400327 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.400337 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.400348 | orchestrator | 2026-03-09 01:02:10.400372 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 01:02:10.400384 | orchestrator | Monday 09 March 2026 01:00:13 +0000 (0:00:00.343) 0:00:21.332 ********** 2026-03-09 01:02:10.400405 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.400416 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.400427 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.400437 | orchestrator | 2026-03-09 01:02:10.400448 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 01:02:10.400459 | orchestrator | Monday 09 March 2026 01:00:14 +0000 (0:00:00.427) 0:00:21.759 ********** 2026-03-09 01:02:10.400470 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.400480 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.400491 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.400502 | orchestrator | 2026-03-09 01:02:10.400513 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-09 01:02:10.400529 | orchestrator | Monday 09 March 2026 01:00:14 +0000 (0:00:00.561) 0:00:22.320 ********** 2026-03-09 01:02:10.400541 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-09 01:02:10.400552 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-09 01:02:10.400562 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-09 01:02:10.400573 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-09 01:02:10.400584 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-09 01:02:10.400595 | orchestrator 
| ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-09 01:02:10.400605 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-09 01:02:10.400616 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-09 01:02:10.400627 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-09 01:02:10.400638 | orchestrator | 2026-03-09 01:02:10.400648 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-09 01:02:10.400659 | orchestrator | Monday 09 March 2026 01:00:15 +0000 (0:00:00.909) 0:00:23.230 ********** 2026-03-09 01:02:10.400670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 01:02:10.400681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 01:02:10.400692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 01:02:10.400703 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.400714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-09 01:02:10.400724 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-09 01:02:10.400735 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 01:02:10.400746 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.400756 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-09 01:02:10.400767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 01:02:10.400778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 01:02:10.400788 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.400825 | orchestrator | 2026-03-09 01:02:10.400844 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-09 01:02:10.400864 | orchestrator | Monday 09 March 2026 01:00:16 +0000 (0:00:00.415) 0:00:23.645 ********** 2026-03-09 
01:02:10.400884 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:02:10.400903 | orchestrator | 2026-03-09 01:02:10.400914 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 01:02:10.400926 | orchestrator | Monday 09 March 2026 01:00:17 +0000 (0:00:00.968) 0:00:24.614 ********** 2026-03-09 01:02:10.400943 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.400954 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.400965 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.400975 | orchestrator | 2026-03-09 01:02:10.400991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 01:02:10.401002 | orchestrator | Monday 09 March 2026 01:00:17 +0000 (0:00:00.354) 0:00:24.968 ********** 2026-03-09 01:02:10.401013 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.401024 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.401035 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.401045 | orchestrator | 2026-03-09 01:02:10.401056 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 01:02:10.401067 | orchestrator | Monday 09 March 2026 01:00:17 +0000 (0:00:00.344) 0:00:25.312 ********** 2026-03-09 01:02:10.401078 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.401089 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.401099 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:02:10.401110 | orchestrator | 2026-03-09 01:02:10.401128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-09 01:02:10.401139 | orchestrator | Monday 09 March 2026 01:00:18 +0000 (0:00:00.366) 0:00:25.679 ********** 2026-03-09 
01:02:10.401150 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.401160 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:02:10.401171 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:02:10.401182 | orchestrator | 2026-03-09 01:02:10.401192 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-09 01:02:10.401203 | orchestrator | Monday 09 March 2026 01:00:19 +0000 (0:00:00.745) 0:00:26.424 ********** 2026-03-09 01:02:10.401214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:02:10.401225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:02:10.401236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:02:10.401247 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.401257 | orchestrator | 2026-03-09 01:02:10.401268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-09 01:02:10.401279 | orchestrator | Monday 09 March 2026 01:00:19 +0000 (0:00:00.389) 0:00:26.813 ********** 2026-03-09 01:02:10.401290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:02:10.401300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:02:10.401311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:02:10.401322 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.401333 | orchestrator | 2026-03-09 01:02:10.401352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-09 01:02:10.401377 | orchestrator | Monday 09 March 2026 01:00:19 +0000 (0:00:00.408) 0:00:27.222 ********** 2026-03-09 01:02:10.401398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:02:10.401416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:02:10.401434 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:02:10.401452 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.401470 | orchestrator | 2026-03-09 01:02:10.401487 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-09 01:02:10.401507 | orchestrator | Monday 09 March 2026 01:00:20 +0000 (0:00:00.378) 0:00:27.601 ********** 2026-03-09 01:02:10.401525 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:02:10.401544 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:02:10.401562 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:02:10.401581 | orchestrator | 2026-03-09 01:02:10.401600 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-09 01:02:10.401617 | orchestrator | Monday 09 March 2026 01:00:20 +0000 (0:00:00.343) 0:00:27.944 ********** 2026-03-09 01:02:10.401628 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 01:02:10.401639 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-09 01:02:10.401649 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-09 01:02:10.401660 | orchestrator | 2026-03-09 01:02:10.401671 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-09 01:02:10.401682 | orchestrator | Monday 09 March 2026 01:00:21 +0000 (0:00:00.509) 0:00:28.454 ********** 2026-03-09 01:02:10.401693 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:02:10.401704 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:02:10.401714 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:02:10.401725 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:02:10.401736 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-09 01:02:10.401747 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:02:10.401758 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 01:02:10.401779 | orchestrator | 2026-03-09 01:02:10.401790 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-09 01:02:10.401829 | orchestrator | Monday 09 March 2026 01:00:22 +0000 (0:00:01.059) 0:00:29.514 ********** 2026-03-09 01:02:10.401840 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:02:10.401851 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:02:10.401862 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:02:10.401873 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:02:10.401884 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 01:02:10.401894 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:02:10.401914 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 01:02:10.401925 | orchestrator | 2026-03-09 01:02:10.401942 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-09 01:02:10.401953 | orchestrator | Monday 09 March 2026 01:00:24 +0000 (0:00:02.251) 0:00:31.766 ********** 2026-03-09 01:02:10.401964 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:02:10.401975 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:02:10.401986 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-09 01:02:10.401997 | orchestrator | 2026-03-09 01:02:10.402007 | 
orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-09 01:02:10.402050 | orchestrator | Monday 09 March 2026 01:00:24 +0000 (0:00:00.396) 0:00:32.163 **********
2026-03-09 01:02:10.402062 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 01:02:10.402074 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 01:02:10.402085 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 01:02:10.402096 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 01:02:10.402107 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 01:02:10.402118 | orchestrator |
2026-03-09 01:02:10.402129 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-09 01:02:10.402140 | orchestrator | Monday 09 March 2026 01:01:12 +0000 (0:00:47.379) 0:01:19.543 **********
2026-03-09 01:02:10.402151 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402183 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402201 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402212 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402223 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-09 01:02:10.402234 | orchestrator |
2026-03-09 01:02:10.402245 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-09 01:02:10.402255 | orchestrator | Monday 09 March 2026 01:01:37 +0000 (0:00:25.185) 0:01:44.728 **********
2026-03-09 01:02:10.402266 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402277 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402287 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402298 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402309 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402319 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402330 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-09 01:02:10.402341 | orchestrator |
2026-03-09 01:02:10.402351 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-09 01:02:10.402362 | orchestrator | Monday 09 March 2026 01:01:49 +0000 (0:00:12.180) 0:01:56.909 **********
2026-03-09 01:02:10.402373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402384 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:02:10.402395 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:02:10.402406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402416 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:02:10.402433 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:02:10.402449 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402460 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:02:10.402471 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:02:10.402482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402493 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:02:10.402503 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:02:10.402514 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402525 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:02:10.402536 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:02:10.402547 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:02:10.402558 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:02:10.402568 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:02:10.402579 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-09 01:02:10.402590 | orchestrator |
2026-03-09 01:02:10.402601 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:02:10.402612 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-09 01:02:10.402631 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-09 01:02:10.402642 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-09 01:02:10.402653 | orchestrator |
2026-03-09 01:02:10.402664 | orchestrator |
2026-03-09 01:02:10.402675 | orchestrator |
2026-03-09 01:02:10.402685 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:02:10.402696 | orchestrator | Monday 09 March 2026 01:02:08 +0000 (0:00:18.792) 0:02:15.701 **********
2026-03-09 01:02:10.402707 | orchestrator | ===============================================================================
2026-03-09 01:02:10.402718 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.38s
2026-03-09 01:02:10.402729 | orchestrator | generate keys ---------------------------------------------------------- 25.19s
2026-03-09 01:02:10.402739 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.79s
2026-03-09 01:02:10.402750 | orchestrator | get keys from monitors ------------------------------------------------- 12.18s
2026-03-09 01:02:10.402761 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.30s
2026-03-09 01:02:10.402772 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.25s
2026-03-09 01:02:10.402782 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.86s
2026-03-09 01:02:10.402816 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.06s
2026-03-09 01:02:10.402836 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.97s
2026-03-09 01:02:10.402855 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s
2026-03-09 01:02:10.402874 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.91s
2026-03-09 01:02:10.402893 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.87s
2026-03-09 01:02:10.402904 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s
2026-03-09 01:02:10.402915 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.75s
2026-03-09 01:02:10.402926 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s
2026-03-09 01:02:10.402936 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s
2026-03-09 01:02:10.402947 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.68s
2026-03-09 01:02:10.402958 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.68s
2026-03-09 01:02:10.402969 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s 2026-03-09
01:02:10.402979 | orchestrator | ceph-facts : Set default osd_pool_default_crush_rule fact --------------- 0.61s
2026-03-09 01:02:10.402990 | orchestrator | 2026-03-09 01:02:10 | INFO  | Task 74d2ef40-6d86-4d0f-bb6e-75acd8c78505 is in state STARTED
2026-03-09 01:02:10.403001 | orchestrator | 2026-03-09 01:02:10 | INFO  | Task 71941d3f-dd93-41b3-bd16-fb769cedf89f is in state STARTED
2026-03-09 01:02:10.403012 | orchestrator | 2026-03-09 01:02:10 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED
2026-03-09 01:02:10.403023 | orchestrator | 2026-03-09 01:02:10 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:53.181334 | orchestrator | 2026-03-09 01:02:53 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED
2026-03-09 01:02:53.181830 | orchestrator | 2026-03-09 01:02:53 | INFO  | Task 74d2ef40-6d86-4d0f-bb6e-75acd8c78505 is in state SUCCESS
2026-03-09 01:02:53.183405 | orchestrator | 2026-03-09 01:02:53 | INFO  | Task 71941d3f-dd93-41b3-bd16-fb769cedf89f is in state STARTED
2026-03-09 01:02:53.187457 | orchestrator | 2026-03-09 01:02:53 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED
2026-03-09 01:02:53.187560 | orchestrator | 2026-03-09 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:39.039029 | orchestrator | 2026-03-09 01:03:39 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED
2026-03-09 01:03:39.041860 | orchestrator | 2026-03-09 01:03:39 | INFO  | Task 71941d3f-dd93-41b3-bd16-fb769cedf89f is in state SUCCESS
2026-03-09 01:03:39.043195 | orchestrator |
2026-03-09 01:03:39.043228 | orchestrator |
2026-03-09 01:03:39.043234 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-09 01:03:39.043239 | orchestrator |
2026-03-09 01:03:39.043243 | orchestrator | TASK [Check if ceph keys exist]
************************************************
2026-03-09 01:03:39.043248 | orchestrator | Monday 09 March 2026 01:02:13 +0000 (0:00:00.172) 0:00:00.172 **********
2026-03-09 01:03:39.043253 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-09 01:03:39.043277 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043281 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043286 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:03:39.043290 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043294 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-09 01:03:39.043298 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-09 01:03:39.043303 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-09 01:03:39.043306 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-09 01:03:39.043310 | orchestrator |
2026-03-09 01:03:39.043314 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-09 01:03:39.043318 | orchestrator | Monday 09 March 2026 01:02:18 +0000 (0:00:04.959) 0:00:05.131 **********
2026-03-09 01:03:39.043322 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-09 01:03:39.043326 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043330 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043334 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:03:39.043338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043342 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-09 01:03:39.043346 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-09 01:03:39.043350 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-09 01:03:39.043354 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-09 01:03:39.043358 | orchestrator |
2026-03-09 01:03:39.043362 | orchestrator | TASK [Create share directory] **************************************************
2026-03-09 01:03:39.043366 | orchestrator | Monday 09 March 2026 01:02:22 +0000 (0:00:01.140) 0:00:09.642 **********
2026-03-09 01:03:39.043371 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-09 01:03:39.043375 | orchestrator |
2026-03-09 01:03:39.043379 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-09 01:03:39.043383 | orchestrator | Monday 09 March 2026 01:02:23 +0000 (0:00:01.140) 0:00:10.782 **********
2026-03-09 01:03:39.043387 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-09 01:03:39.043391 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043395 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043399 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:03:39.043403 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043407 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-09 01:03:39.043412 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-09 01:03:39.043415 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-09 01:03:39.043423 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-09 01:03:39.043427 | orchestrator |
2026-03-09 01:03:39.043441 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-09 01:03:39.043445 | orchestrator | Monday 09 March 2026 01:02:40 +0000 (0:00:16.781) 0:00:27.564 **********
2026-03-09 01:03:39.043449 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-09 01:03:39.043453 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-09 01:03:39.043457 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-09 01:03:39.043461 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-09 01:03:39.043472 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-09 01:03:39.043477 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-09 01:03:39.043481 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-09 01:03:39.043485 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-09 01:03:39.043489 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-09 01:03:39.043492 | orchestrator |
2026-03-09 01:03:39.043496 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-09 01:03:39.043500 | orchestrator | Monday 09 March 2026 01:02:44 +0000 (0:00:03.342) 0:00:30.906 **********
2026-03-09 01:03:39.043505 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-09 01:03:39.043509 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043516 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043523 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:03:39.043529 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-09 01:03:39.043536 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-09 01:03:39.043542 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-09 01:03:39.043549 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-09 01:03:39.043555 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-09 01:03:39.043562 | orchestrator |
2026-03-09 01:03:39.043569 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:03:39.043576 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:03:39.043584 | orchestrator |
2026-03-09 01:03:39.043590 | orchestrator |
2026-03-09 01:03:39.043598 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:03:39.043604 | orchestrator | Monday 09 March 2026 01:02:51 +0000 (0:00:07.526) 0:00:38.432 **********
2026-03-09 01:03:39.043608 | orchestrator | ===============================================================================
2026-03-09 01:03:39.043612 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.78s
2026-03-09 01:03:39.043616 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.53s
2026-03-09 01:03:39.043620 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.96s
2026-03-09 01:03:39.043624 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.51s
2026-03-09 01:03:39.043627 | orchestrator | Check if target directories exist --------------------------------------- 3.34s
2026-03-09 01:03:39.043631 | orchestrator | Create share directory -------------------------------------------------- 1.14s
2026-03-09 01:03:39.043640 | orchestrator |
2026-03-09 01:03:39.043644 | orchestrator |
2026-03-09 01:03:39.043648 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:03:39.043652 | orchestrator |
2026-03-09 01:03:39.043656 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:03:39.043660 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.292) 0:00:00.292 **********
2026-03-09 01:03:39.043664 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:03:39.043668 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:03:39.043672 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:03:39.043676 | orchestrator |
2026-03-09 01:03:39.043680 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:03:39.043684 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.333) 0:00:00.626 **********
2026-03-09 01:03:39.043688 | orchestrator |
ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-09 01:03:39.043692 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-09 01:03:39.043696 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-09 01:03:39.043700 | orchestrator |
2026-03-09 01:03:39.043704 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-09 01:03:39.043708 | orchestrator |
2026-03-09 01:03:39.043712 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-09 01:03:39.043716 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.494) 0:00:01.120 **********
2026-03-09 01:03:39.043720 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:03:39.043724 | orchestrator |
2026-03-09 01:03:39.043728 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-09 01:03:39.043735 | orchestrator | Monday 09 March 2026 01:01:37 +0000 (0:00:00.544) 0:00:01.665 **********
2026-03-09 01:03:39.043748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-09 01:03:39.043762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-09 01:03:39.043772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT':
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.043780 | orchestrator | 2026-03-09 01:03:39.043785 | orchestrator | TASK [horizon : Set 
empty custom policy] *************************************** 2026-03-09 01:03:39.043789 | orchestrator | Monday 09 March 2026 01:01:38 +0000 (0:00:01.308) 0:00:02.974 ********** 2026-03-09 01:03:39.043793 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.043797 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.043801 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.043805 | orchestrator | 2026-03-09 01:03:39.043810 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:03:39.043898 | orchestrator | Monday 09 March 2026 01:01:39 +0000 (0:00:00.551) 0:00:03.525 ********** 2026-03-09 01:03:39.043905 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:03:39.044200 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:03:39.044219 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:03:39.044225 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:03:39.044232 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:03:39.044238 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:03:39.044245 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:03:39.044251 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:03:39.044258 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:03:39.044264 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:03:39.044271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:03:39.044278 
| orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:03:39.044292 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:03:39.044299 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:03:39.044305 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:03:39.044312 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:03:39.044318 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:03:39.044325 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:03:39.044332 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:03:39.044339 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:03:39.044354 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:03:39.044361 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:03:39.044367 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:03:39.044374 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:03:39.044383 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-09 01:03:39.044400 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-09 01:03:39.044407 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-09 01:03:39.044414 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-09 01:03:39.044421 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-09 01:03:39.044428 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-09 01:03:39.044434 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-09 01:03:39.044440 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-09 01:03:39.044446 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-09 01:03:39.044453 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-09 01:03:39.044460 | orchestrator | 2026-03-09 01:03:39.044466 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044473 | orchestrator | Monday 09 March 2026 01:01:40 +0000 (0:00:00.817) 0:00:04.342 ********** 2026-03-09 01:03:39.044480 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044487 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.044493 | orchestrator | ok: [testbed-node-2] 
2026-03-09 01:03:39.044499 | orchestrator | 2026-03-09 01:03:39.044505 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044511 | orchestrator | Monday 09 March 2026 01:01:40 +0000 (0:00:00.341) 0:00:04.683 ********** 2026-03-09 01:03:39.044517 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044524 | orchestrator | 2026-03-09 01:03:39.044530 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044536 | orchestrator | Monday 09 March 2026 01:01:40 +0000 (0:00:00.148) 0:00:04.832 ********** 2026-03-09 01:03:39.044543 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044549 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044555 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.044561 | orchestrator | 2026-03-09 01:03:39.044567 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044574 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.513) 0:00:05.346 ********** 2026-03-09 01:03:39.044580 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044587 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.044593 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.044599 | orchestrator | 2026-03-09 01:03:39.044606 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044613 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.322) 0:00:05.669 ********** 2026-03-09 01:03:39.044617 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044621 | orchestrator | 2026-03-09 01:03:39.044625 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044629 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.192) 0:00:05.862 ********** 2026-03-09 
01:03:39.044633 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044637 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044646 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.044650 | orchestrator | 2026-03-09 01:03:39.044658 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044662 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.339) 0:00:06.201 ********** 2026-03-09 01:03:39.044666 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044670 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.044674 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.044677 | orchestrator | 2026-03-09 01:03:39.044681 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044685 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:00.390) 0:00:06.592 ********** 2026-03-09 01:03:39.044689 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044693 | orchestrator | 2026-03-09 01:03:39.044697 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044701 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:00.349) 0:00:06.941 ********** 2026-03-09 01:03:39.044712 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044716 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044720 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.044724 | orchestrator | 2026-03-09 01:03:39.044727 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044731 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.321) 0:00:07.263 ********** 2026-03-09 01:03:39.044735 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044739 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.044743 | orchestrator | 
ok: [testbed-node-2] 2026-03-09 01:03:39.044747 | orchestrator | 2026-03-09 01:03:39.044751 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044755 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.448) 0:00:07.712 ********** 2026-03-09 01:03:39.044759 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044763 | orchestrator | 2026-03-09 01:03:39.044767 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044771 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.126) 0:00:07.838 ********** 2026-03-09 01:03:39.044775 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044779 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044782 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.044786 | orchestrator | 2026-03-09 01:03:39.044790 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044794 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.306) 0:00:08.145 ********** 2026-03-09 01:03:39.044798 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044802 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.044806 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.044810 | orchestrator | 2026-03-09 01:03:39.044814 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044818 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:00.557) 0:00:08.703 ********** 2026-03-09 01:03:39.044822 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044826 | orchestrator | 2026-03-09 01:03:39.044830 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044834 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:00.147) 0:00:08.850 
********** 2026-03-09 01:03:39.044838 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044842 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044845 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.044849 | orchestrator | 2026-03-09 01:03:39.044853 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044857 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:00.302) 0:00:09.152 ********** 2026-03-09 01:03:39.044861 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044865 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.044872 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.044876 | orchestrator | 2026-03-09 01:03:39.044880 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044884 | orchestrator | Monday 09 March 2026 01:01:45 +0000 (0:00:00.398) 0:00:09.551 ********** 2026-03-09 01:03:39.044888 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044892 | orchestrator | 2026-03-09 01:03:39.044896 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044900 | orchestrator | Monday 09 March 2026 01:01:45 +0000 (0:00:00.161) 0:00:09.713 ********** 2026-03-09 01:03:39.044904 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044908 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044912 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.044916 | orchestrator | 2026-03-09 01:03:39.044920 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.044924 | orchestrator | Monday 09 March 2026 01:01:45 +0000 (0:00:00.304) 0:00:10.017 ********** 2026-03-09 01:03:39.044928 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.044931 | orchestrator | ok: [testbed-node-1] 2026-03-09 
01:03:39.044935 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.044939 | orchestrator | 2026-03-09 01:03:39.044943 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.044947 | orchestrator | Monday 09 March 2026 01:01:46 +0000 (0:00:00.668) 0:00:10.686 ********** 2026-03-09 01:03:39.044951 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044977 | orchestrator | 2026-03-09 01:03:39.044982 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.044986 | orchestrator | Monday 09 March 2026 01:01:46 +0000 (0:00:00.151) 0:00:10.837 ********** 2026-03-09 01:03:39.044990 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.044994 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.044998 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045002 | orchestrator | 2026-03-09 01:03:39.045006 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.045010 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:00.389) 0:00:11.227 ********** 2026-03-09 01:03:39.045014 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.045018 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.045022 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.045026 | orchestrator | 2026-03-09 01:03:39.045030 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.045034 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:00.396) 0:00:11.623 ********** 2026-03-09 01:03:39.045041 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045045 | orchestrator | 2026-03-09 01:03:39.045049 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.045053 | orchestrator | Monday 09 March 2026 01:01:47 +0000 
(0:00:00.186) 0:00:11.810 ********** 2026-03-09 01:03:39.045057 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045060 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045064 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045068 | orchestrator | 2026-03-09 01:03:39.045072 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.045076 | orchestrator | Monday 09 March 2026 01:01:48 +0000 (0:00:00.672) 0:00:12.483 ********** 2026-03-09 01:03:39.045080 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.045084 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:39.045088 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.045092 | orchestrator | 2026-03-09 01:03:39.045099 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.045103 | orchestrator | Monday 09 March 2026 01:01:48 +0000 (0:00:00.435) 0:00:12.918 ********** 2026-03-09 01:03:39.045107 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045111 | orchestrator | 2026-03-09 01:03:39.045115 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.045126 | orchestrator | Monday 09 March 2026 01:01:48 +0000 (0:00:00.198) 0:00:13.116 ********** 2026-03-09 01:03:39.045130 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045134 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045138 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045142 | orchestrator | 2026-03-09 01:03:39.045146 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:03:39.045150 | orchestrator | Monday 09 March 2026 01:01:49 +0000 (0:00:00.330) 0:00:13.447 ********** 2026-03-09 01:03:39.045154 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:39.045158 | orchestrator | ok: 
[testbed-node-1] 2026-03-09 01:03:39.045162 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:39.045166 | orchestrator | 2026-03-09 01:03:39.045170 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:03:39.045174 | orchestrator | Monday 09 March 2026 01:01:49 +0000 (0:00:00.354) 0:00:13.801 ********** 2026-03-09 01:03:39.045178 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045182 | orchestrator | 2026-03-09 01:03:39.045186 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:03:39.045190 | orchestrator | Monday 09 March 2026 01:01:49 +0000 (0:00:00.144) 0:00:13.946 ********** 2026-03-09 01:03:39.045194 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045197 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045201 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045205 | orchestrator | 2026-03-09 01:03:39.045209 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-09 01:03:39.045213 | orchestrator | Monday 09 March 2026 01:01:50 +0000 (0:00:00.546) 0:00:14.492 ********** 2026-03-09 01:03:39.045217 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:03:39.045221 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:39.045225 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:39.045229 | orchestrator | 2026-03-09 01:03:39.045233 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-09 01:03:39.045237 | orchestrator | Monday 09 March 2026 01:01:52 +0000 (0:00:02.059) 0:00:16.554 ********** 2026-03-09 01:03:39.045241 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:03:39.045245 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 
01:03:39.045249 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:03:39.045253 | orchestrator | 2026-03-09 01:03:39.045257 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-09 01:03:39.045261 | orchestrator | Monday 09 March 2026 01:01:54 +0000 (0:00:02.196) 0:00:18.750 ********** 2026-03-09 01:03:39.045265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:03:39.045270 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:03:39.045274 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:03:39.045278 | orchestrator | 2026-03-09 01:03:39.045282 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-09 01:03:39.045286 | orchestrator | Monday 09 March 2026 01:01:57 +0000 (0:00:02.746) 0:00:21.496 ********** 2026-03-09 01:03:39.045289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:03:39.045293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:03:39.045297 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:03:39.045301 | orchestrator | 2026-03-09 01:03:39.045305 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-09 01:03:39.045309 | orchestrator | Monday 09 March 2026 01:01:59 +0000 (0:00:02.487) 0:00:23.984 ********** 2026-03-09 01:03:39.045316 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045320 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045324 | 
orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045328 | orchestrator | 2026-03-09 01:03:39.045332 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-09 01:03:39.045336 | orchestrator | Monday 09 March 2026 01:02:00 +0000 (0:00:00.372) 0:00:24.356 ********** 2026-03-09 01:03:39.045340 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045344 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045348 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045352 | orchestrator | 2026-03-09 01:03:39.045358 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:03:39.045362 | orchestrator | Monday 09 March 2026 01:02:00 +0000 (0:00:00.334) 0:00:24.690 ********** 2026-03-09 01:03:39.045366 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:03:39.045370 | orchestrator | 2026-03-09 01:03:39.045374 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-09 01:03:39.045378 | orchestrator | Monday 09 March 2026 01:02:01 +0000 (0:00:00.872) 0:00:25.562 ********** 2026-03-09 01:03:39.045391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.045402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.045411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.045415 | orchestrator | 2026-03-09 01:03:39.045419 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-09 01:03:39.045426 | orchestrator | Monday 09 March 2026 01:02:03 +0000 (0:00:02.069) 0:00:27.632 ********** 2026-03-09 01:03:39.045437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045442 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045454 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045469 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045473 | orchestrator | 2026-03-09 01:03:39.045477 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-09 01:03:39.045481 | orchestrator | Monday 09 March 2026 01:02:04 +0000 (0:00:00.741) 0:00:28.373 ********** 2026-03-09 01:03:39.045486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045493 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045503 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045508 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045529 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045535 | orchestrator | 2026-03-09 01:03:39.045542 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-03-09 01:03:39.045549 | orchestrator | Monday 09 March 2026 01:02:05 +0000 (0:00:00.936) 0:00:29.309 ********** 2026-03-09 01:03:39.045561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.045581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.045590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:03:39.045603 | orchestrator | 2026-03-09 01:03:39.045607 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-03-09 01:03:39.045611 | orchestrator | Monday 09 March 2026 01:02:07 +0000 (0:00:02.170) 0:00:31.480 ********** 2026-03-09 01:03:39.045615 | orchestrator | changed: 
[testbed-node-0] => { 2026-03-09 01:03:39.045619 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:03:39.045623 | orchestrator | } 2026-03-09 01:03:39.045627 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:03:39.045631 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:03:39.045635 | orchestrator | } 2026-03-09 01:03:39.045639 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:03:39.045642 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:03:39.045646 | orchestrator | } 2026-03-09 01:03:39.045650 | orchestrator | 2026-03-09 01:03:39.045654 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:03:39.045658 | orchestrator | Monday 09 March 2026 01:02:07 +0000 (0:00:00.353) 0:00:31.833 ********** 2026-03-09 01:03:39.045669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045678 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045699 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:03:39.045716 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045720 | orchestrator | 2026-03-09 01:03:39.045724 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:03:39.045728 
| orchestrator | Monday 09 March 2026 01:02:08 +0000 (0:00:00.847) 0:00:32.681 ********** 2026-03-09 01:03:39.045732 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:39.045736 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:39.045740 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:39.045744 | orchestrator | 2026-03-09 01:03:39.045748 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:03:39.045752 | orchestrator | Monday 09 March 2026 01:02:08 +0000 (0:00:00.485) 0:00:33.166 ********** 2026-03-09 01:03:39.045756 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:03:39.045760 | orchestrator | 2026-03-09 01:03:39.045764 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-09 01:03:39.045767 | orchestrator | Monday 09 March 2026 01:02:09 +0000 (0:00:00.523) 0:00:33.690 ********** 2026-03-09 01:03:39.045772 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:39.045776 | orchestrator | 2026-03-09 01:03:39.045780 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-09 01:03:39.045783 | orchestrator | Monday 09 March 2026 01:02:11 +0000 (0:00:02.411) 0:00:36.101 ********** 2026-03-09 01:03:39.045787 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:39.045791 | orchestrator | 2026-03-09 01:03:39.045795 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-09 01:03:39.045799 | orchestrator | Monday 09 March 2026 01:02:14 +0000 (0:00:02.336) 0:00:38.438 ********** 2026-03-09 01:03:39.045803 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:39.045807 | orchestrator | 2026-03-09 01:03:39.045811 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 
01:03:39.045815 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:17.922) 0:00:56.361 ********** 2026-03-09 01:03:39.045819 | orchestrator | 2026-03-09 01:03:39.045823 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:03:39.045827 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.075) 0:00:56.437 ********** 2026-03-09 01:03:39.045831 | orchestrator | 2026-03-09 01:03:39.045835 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:03:39.045838 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.314) 0:00:56.751 ********** 2026-03-09 01:03:39.045842 | orchestrator | 2026-03-09 01:03:39.045846 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-09 01:03:39.045850 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.089) 0:00:56.841 ********** 2026-03-09 01:03:39.045854 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:39.045858 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:39.045862 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:03:39.045866 | orchestrator | 2026-03-09 01:03:39.045870 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:03:39.045875 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-03-09 01:03:39.045879 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-03-09 01:03:39.045890 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-03-09 01:03:39.045894 | orchestrator | 2026-03-09 01:03:39.045898 | orchestrator | 2026-03-09 01:03:39.045902 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:03:39.045906 | 
orchestrator | Monday 09 March 2026 01:03:36 +0000 (0:01:03.704) 0:02:00.545 ********** 2026-03-09 01:03:39.045910 | orchestrator | =============================================================================== 2026-03-09 01:03:39.045913 | orchestrator | horizon : Restart horizon container ------------------------------------ 63.70s 2026-03-09 01:03:39.045917 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.92s 2026-03-09 01:03:39.045921 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.75s 2026-03-09 01:03:39.045926 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.49s 2026-03-09 01:03:39.045929 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.41s 2026-03-09 01:03:39.045933 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.34s 2026-03-09 01:03:39.045937 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.20s 2026-03-09 01:03:39.045941 | orchestrator | service-check-containers : horizon | Check containers ------------------- 2.17s 2026-03-09 01:03:39.045945 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.07s 2026-03-09 01:03:39.045949 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.06s 2026-03-09 01:03:39.045953 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.31s 2026-03-09 01:03:39.045990 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.94s 2026-03-09 01:03:39.046083 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.87s 2026-03-09 01:03:39.046097 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.85s 2026-03-09 01:03:39.046101 | orchestrator | horizon 
: include_tasks ------------------------------------------------- 0.82s 2026-03-09 01:03:39.046105 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.74s 2026-03-09 01:03:39.046109 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.67s 2026-03-09 01:03:39.046114 | orchestrator | horizon : Update policy file name --------------------------------------- 0.67s 2026-03-09 01:03:39.046118 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-03-09 01:03:39.046122 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.55s 2026-03-09 01:03:39.046126 | orchestrator | 2026-03-09 01:03:39 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:03:39.046130 | orchestrator | 2026-03-09 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:42.093126 | orchestrator | 2026-03-09 01:03:42 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED 2026-03-09 01:03:42.093858 | orchestrator | 2026-03-09 01:03:42 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:03:42.093875 | orchestrator | 2026-03-09 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:45.132213 | orchestrator | 2026-03-09 01:03:45 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED 2026-03-09 01:03:45.133077 | orchestrator | 2026-03-09 01:03:45 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:03:45.133132 | orchestrator | 2026-03-09 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:48.181750 | orchestrator | 2026-03-09 01:03:48 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED 2026-03-09 01:03:48.181827 | orchestrator | 2026-03-09 01:03:48 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 
01:03:48.181858 | orchestrator | 2026-03-09 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:51.234255 | orchestrator | 2026-03-09 01:03:51 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED 2026-03-09 01:03:51.237167 | orchestrator | 2026-03-09 01:03:51 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:03:51.237220 | orchestrator | 2026-03-09 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:54.287530 | orchestrator | 2026-03-09 01:03:54 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state STARTED 2026-03-09 01:03:54.288654 | orchestrator | 2026-03-09 01:03:54 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:03:54.288704 | orchestrator | 2026-03-09 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:57.334477 | orchestrator | 2026-03-09 01:03:57 | INFO  | Task df6cc46a-b511-4313-936a-18055421a480 is in state SUCCESS 2026-03-09 01:03:57.337076 | orchestrator | 2026-03-09 01:03:57 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:03:57.337136 | orchestrator | 2026-03-09 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:00.394393 | orchestrator | 2026-03-09 01:04:00 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:00.395403 | orchestrator | 2026-03-09 01:04:00 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:00.396283 | orchestrator | 2026-03-09 01:04:00 | INFO  | Task 27b88d36-a67e-42ff-a64d-dc4a9244f739 is in state STARTED 2026-03-09 01:04:00.397606 | orchestrator | 2026-03-09 01:04:00 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:00.397649 | orchestrator | 2026-03-09 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:03.442775 | orchestrator | 2026-03-09 01:04:03 | INFO  | Task 
470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:03.442892 | orchestrator | 2026-03-09 01:04:03 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:03.444169 | orchestrator | 2026-03-09 01:04:03 | INFO  | Task 27b88d36-a67e-42ff-a64d-dc4a9244f739 is in state STARTED 2026-03-09 01:04:03.447558 | orchestrator | 2026-03-09 01:04:03 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:03.447650 | orchestrator | 2026-03-09 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:06.499611 | orchestrator | 2026-03-09 01:04:06 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:06.503221 | orchestrator | 2026-03-09 01:04:06 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:06.510269 | orchestrator | 2026-03-09 01:04:06 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:06.512781 | orchestrator | 2026-03-09 01:04:06 | INFO  | Task 27b88d36-a67e-42ff-a64d-dc4a9244f739 is in state SUCCESS 2026-03-09 01:04:06.517732 | orchestrator | 2026-03-09 01:04:06 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:06.521802 | orchestrator | 2026-03-09 01:04:06 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:06.521853 | orchestrator | 2026-03-09 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:09.564350 | orchestrator | 2026-03-09 01:04:09 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:09.564846 | orchestrator | 2026-03-09 01:04:09 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:09.566201 | orchestrator | 2026-03-09 01:04:09 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:09.566314 | orchestrator | 2026-03-09 01:04:09 | INFO  | Task 
100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:09.567386 | orchestrator | 2026-03-09 01:04:09 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:09.567415 | orchestrator | 2026-03-09 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:12.612798 | orchestrator | 2026-03-09 01:04:12 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:12.612911 | orchestrator | 2026-03-09 01:04:12 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:12.612933 | orchestrator | 2026-03-09 01:04:12 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:12.612947 | orchestrator | 2026-03-09 01:04:12 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:12.612960 | orchestrator | 2026-03-09 01:04:12 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:12.612974 | orchestrator | 2026-03-09 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:15.668049 | orchestrator | 2026-03-09 01:04:15 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:15.670379 | orchestrator | 2026-03-09 01:04:15 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:15.672798 | orchestrator | 2026-03-09 01:04:15 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:15.674140 | orchestrator | 2026-03-09 01:04:15 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:15.675935 | orchestrator | 2026-03-09 01:04:15 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:15.675998 | orchestrator | 2026-03-09 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:18.714368 | orchestrator | 2026-03-09 01:04:18 | INFO  | Task 
470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:18.715252 | orchestrator | 2026-03-09 01:04:18 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:18.716337 | orchestrator | 2026-03-09 01:04:18 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:18.717487 | orchestrator | 2026-03-09 01:04:18 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:18.719596 | orchestrator | 2026-03-09 01:04:18 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:18.719635 | orchestrator | 2026-03-09 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:21.922926 | orchestrator | 2026-03-09 01:04:21 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:21.923029 | orchestrator | 2026-03-09 01:04:21 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:21.923148 | orchestrator | 2026-03-09 01:04:21 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:21.923170 | orchestrator | 2026-03-09 01:04:21 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:21.923188 | orchestrator | 2026-03-09 01:04:21 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:21.923206 | orchestrator | 2026-03-09 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:24.964299 | orchestrator | 2026-03-09 01:04:24 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state STARTED 2026-03-09 01:04:24.967863 | orchestrator | 2026-03-09 01:04:24 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:24.973317 | orchestrator | 2026-03-09 01:04:24 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:24.978276 | orchestrator | 2026-03-09 01:04:24 | INFO  | Task 
100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:24.981294 | orchestrator | 2026-03-09 01:04:24 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:24.981748 | orchestrator | 2026-03-09 01:04:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:28.066450 | orchestrator | 2026-03-09 01:04:28.066569 | orchestrator | 2026-03-09 01:04:28.066590 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-09 01:04:28.066605 | orchestrator | 2026-03-09 01:04:28.066619 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-09 01:04:28.066634 | orchestrator | Monday 09 March 2026 01:02:56 +0000 (0:00:00.240) 0:00:00.240 ********** 2026-03-09 01:04:28.066651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-09 01:04:28.066668 | orchestrator | 2026-03-09 01:04:28.066682 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-09 01:04:28.066696 | orchestrator | Monday 09 March 2026 01:02:56 +0000 (0:00:00.245) 0:00:00.486 ********** 2026-03-09 01:04:28.066710 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-09 01:04:28.066727 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-09 01:04:28.066743 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-09 01:04:28.066758 | orchestrator | 2026-03-09 01:04:28.066771 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-09 01:04:28.066785 | orchestrator | Monday 09 March 2026 01:02:58 +0000 (0:00:01.347) 0:00:01.833 ********** 2026-03-09 01:04:28.066799 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': 
'/opt/cephclient/configuration/ceph.conf'}) 2026-03-09 01:04:28.066813 | orchestrator | 2026-03-09 01:04:28.066826 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-09 01:04:28.066839 | orchestrator | Monday 09 March 2026 01:02:59 +0000 (0:00:01.570) 0:00:03.404 ********** 2026-03-09 01:04:28.066853 | orchestrator | changed: [testbed-manager] 2026-03-09 01:04:28.066867 | orchestrator | 2026-03-09 01:04:28.066881 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-09 01:04:28.066915 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:00.991) 0:00:04.395 ********** 2026-03-09 01:04:28.066930 | orchestrator | changed: [testbed-manager] 2026-03-09 01:04:28.066944 | orchestrator | 2026-03-09 01:04:28.066959 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-09 01:04:28.066972 | orchestrator | Monday 09 March 2026 01:03:01 +0000 (0:00:00.944) 0:00:05.339 ********** 2026-03-09 01:04:28.066985 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-09 01:04:28.066999 | orchestrator | ok: [testbed-manager] 2026-03-09 01:04:28.067013 | orchestrator | 2026-03-09 01:04:28.067789 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-09 01:04:28.067811 | orchestrator | Monday 09 March 2026 01:03:45 +0000 (0:00:43.315) 0:00:48.654 ********** 2026-03-09 01:04:28.067825 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-09 01:04:28.067839 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-09 01:04:28.067853 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-09 01:04:28.067895 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-09 01:04:28.067909 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-09 01:04:28.067923 | orchestrator | 2026-03-09 01:04:28.067937 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-09 01:04:28.067951 | orchestrator | Monday 09 March 2026 01:03:49 +0000 (0:00:04.627) 0:00:53.282 ********** 2026-03-09 01:04:28.067964 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-09 01:04:28.067977 | orchestrator | 2026-03-09 01:04:28.067990 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-09 01:04:28.068004 | orchestrator | Monday 09 March 2026 01:03:50 +0000 (0:00:00.523) 0:00:53.805 ********** 2026-03-09 01:04:28.068019 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:04:28.068066 | orchestrator | 2026-03-09 01:04:28.068081 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-09 01:04:28.068105 | orchestrator | Monday 09 March 2026 01:03:50 +0000 (0:00:00.140) 0:00:53.945 ********** 2026-03-09 01:04:28.068118 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:04:28.068131 | orchestrator | 2026-03-09 01:04:28.068145 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-09 01:04:28.068158 | orchestrator | Monday 09 March 2026 01:03:51 +0000 (0:00:00.620) 0:00:54.565 ********** 2026-03-09 01:04:28.068172 | orchestrator | changed: [testbed-manager] 2026-03-09 01:04:28.068186 | orchestrator | 2026-03-09 01:04:28.068200 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-09 01:04:28.068216 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:01.646) 0:00:56.211 ********** 2026-03-09 01:04:28.068230 | orchestrator | changed: [testbed-manager] 2026-03-09 01:04:28.068243 | orchestrator | 2026-03-09 01:04:28.068257 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-09 01:04:28.068271 | orchestrator | Monday 09 March 2026 01:03:53 +0000 (0:00:00.811) 0:00:57.023 ********** 2026-03-09 01:04:28.068285 | orchestrator | changed: [testbed-manager] 2026-03-09 01:04:28.068300 | orchestrator | 2026-03-09 01:04:28.068314 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-09 01:04:28.068328 | orchestrator | Monday 09 March 2026 01:03:54 +0000 (0:00:00.659) 0:00:57.682 ********** 2026-03-09 01:04:28.068343 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-09 01:04:28.068357 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-09 01:04:28.068370 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-09 01:04:28.068385 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-09 01:04:28.068398 | orchestrator | 2026-03-09 01:04:28.068411 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:04:28.068426 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 01:04:28.068441 | orchestrator | 2026-03-09 01:04:28.068455 | orchestrator | 2026-03-09 
01:04:28.068536 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:04:28.068552 | orchestrator | Monday 09 March 2026 01:03:55 +0000 (0:00:01.654) 0:00:59.336 ********** 2026-03-09 01:04:28.068565 | orchestrator | =============================================================================== 2026-03-09 01:04:28.068579 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.32s 2026-03-09 01:04:28.068594 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.63s 2026-03-09 01:04:28.068607 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s 2026-03-09 01:04:28.068620 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.65s 2026-03-09 01:04:28.068632 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.57s 2026-03-09 01:04:28.068645 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.35s 2026-03-09 01:04:28.068658 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s 2026-03-09 01:04:28.068703 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.94s 2026-03-09 01:04:28.068718 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.81s 2026-03-09 01:04:28.068731 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.66s 2026-03-09 01:04:28.068744 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.62s 2026-03-09 01:04:28.068752 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.52s 2026-03-09 01:04:28.068760 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-09 01:04:28.068768 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-03-09 01:04:28.068776 | orchestrator | 2026-03-09 01:04:28.068784 | orchestrator | 2026-03-09 01:04:28.068791 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:04:28.068799 | orchestrator | 2026-03-09 01:04:28.068824 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:04:28.068839 | orchestrator | Monday 09 March 2026 01:04:01 +0000 (0:00:00.201) 0:00:00.201 ********** 2026-03-09 01:04:28.068853 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:28.068922 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.068932 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.068940 | orchestrator | 2026-03-09 01:04:28.068948 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:04:28.069176 | orchestrator | Monday 09 March 2026 01:04:01 +0000 (0:00:00.357) 0:00:00.559 ********** 2026-03-09 01:04:28.069185 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-09 01:04:28.069193 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-09 01:04:28.069201 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-09 01:04:28.069209 | orchestrator | 2026-03-09 01:04:28.069218 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-09 01:04:28.069225 | orchestrator | 2026-03-09 01:04:28.069233 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-09 01:04:28.069241 | orchestrator | Monday 09 March 2026 01:04:02 +0000 (0:00:00.920) 0:00:01.479 ********** 2026-03-09 01:04:28.069249 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.069257 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.069264 | orchestrator | ok: 
[testbed-node-0] 2026-03-09 01:04:28.069271 | orchestrator | 2026-03-09 01:04:28.069278 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:04:28.069285 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:04:28.069293 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:04:28.069300 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:04:28.069307 | orchestrator | 2026-03-09 01:04:28.069314 | orchestrator | 2026-03-09 01:04:28.069320 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:04:28.069327 | orchestrator | Monday 09 March 2026 01:04:03 +0000 (0:00:00.885) 0:00:02.364 ********** 2026-03-09 01:04:28.069334 | orchestrator | =============================================================================== 2026-03-09 01:04:28.069340 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2026-03-09 01:04:28.069352 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.89s 2026-03-09 01:04:28.069364 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-03-09 01:04:28.069374 | orchestrator | 2026-03-09 01:04:28.069382 | orchestrator | 2026-03-09 01:04:28 | INFO  | Task 470fccfd-2788-4fc2-a1f3-6f7198b40670 is in state SUCCESS 2026-03-09 01:04:28.069389 | orchestrator | 2026-03-09 01:04:28.069396 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:04:28.069412 | orchestrator | 2026-03-09 01:04:28.069419 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:04:28.069425 | orchestrator | Monday 09 March 2026 01:01:36 +0000 
(0:00:00.300) 0:00:00.300 ********** 2026-03-09 01:04:28.069432 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:28.069439 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.069446 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.069452 | orchestrator | 2026-03-09 01:04:28.069459 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:04:28.069466 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.317) 0:00:00.617 ********** 2026-03-09 01:04:28.069472 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-09 01:04:28.069479 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-09 01:04:28.069517 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-09 01:04:28.069525 | orchestrator | 2026-03-09 01:04:28.069532 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-09 01:04:28.069539 | orchestrator | 2026-03-09 01:04:28.069546 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:04:28.069552 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.460) 0:00:01.077 ********** 2026-03-09 01:04:28.069559 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:04:28.069566 | orchestrator | 2026-03-09 01:04:28.069572 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-09 01:04:28.069579 | orchestrator | Monday 09 March 2026 01:01:37 +0000 (0:00:00.623) 0:00:01.701 ********** 2026-03-09 01:04:28.069597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.069609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.069618 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.069653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:04:28.069703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:04:28.069710 | orchestrator |
2026-03-09 01:04:28.069717 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-09 01:04:28.069724 | orchestrator | Monday 09 March 2026 01:01:39 +0000 (0:00:02.025) 0:00:03.726 **********
2026-03-09 01:04:28.069730 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:04:28.069737 | orchestrator |
2026-03-09 01:04:28.069744 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-09 01:04:28.069751 | orchestrator | Monday 09 March 2026 01:01:39 +0000 (0:00:00.168) 0:00:03.894 **********
2026-03-09 01:04:28.069757 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:04:28.069764 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:04:28.069771 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:04:28.069777 | orchestrator |
2026-03-09 01:04:28.069784 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-09 01:04:28.069791 | orchestrator | Monday 09 March 2026 01:01:40 +0000
(0:00:00.486) 0:00:04.381 **********
2026-03-09 01:04:28.069798 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:04:28.069804 | orchestrator |
2026-03-09 01:04:28.069811 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-09 01:04:28.069837 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.985) 0:00:05.367 **********
2026-03-09 01:04:28.069845 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:04:28.069852 | orchestrator |
2026-03-09 01:04:28.069858 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-09 01:04:28.069865 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.548) 0:00:05.915 **********
2026-03-09 01:04:28.069876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-03-09 01:04:28.069885 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.069897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.069923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.069968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})
2026-03-09 01:04:28.069975 | orchestrator |
2026-03-09 01:04:28.069982 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-03-09 01:04:28.069989 | orchestrator | Monday 09 March 2026 01:01:45 +0000 (0:00:04.126) 0:00:10.042 **********
2026-03-09 01:04:28.070002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-03-09 01:04:28.070010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.070094 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.070120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.070134 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.070151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-03-09 01:04:28.070163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:04:28.070174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:04:28.070181 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:04:28.070188 | orchestrator |
2026-03-09 01:04:28.070195 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-09 01:04:28.070202 | orchestrator | Monday 09 March 2026 01:01:46 +0000 (0:00:00.762) 0:00:10.805 **********
2026-03-09 01:04:28.070209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.070217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.070314 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.070341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.070356 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.070363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})
2026-03-09 01:04:28.070378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:04:28.070386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:04:28.070397 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:04:28.070404 | orchestrator |
2026-03-09 01:04:28.070411 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-03-09 01:04:28.070418 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:00.915) 0:00:11.720 **********
2026-03-09 01:04:28.070428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.070437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.070450 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.070458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070509 | orchestrator | 2026-03-09 01:04:28.070516 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-09 01:04:28.070528 | orchestrator | Monday 09 March 2026 01:01:51 +0000 (0:00:03.727) 0:00:15.447 ********** 2026-03-09 01:04:28.070535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.070550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.070566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.070606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.070652 | orchestrator | 2026-03-09 01:04:28.070659 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-09 01:04:28.070666 | orchestrator | Monday 09 March 2026 01:01:57 +0000 (0:00:06.426) 0:00:21.874 ********** 2026-03-09 01:04:28.070673 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.070680 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:04:28.070686 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:04:28.070693 | orchestrator | 2026-03-09 01:04:28.070700 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-09 01:04:28.070706 | orchestrator | Monday 09 March 2026 01:01:59 +0000 (0:00:01.843) 0:00:23.718 ********** 2026-03-09 01:04:28.070713 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070720 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.070726 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.070733 | orchestrator | 2026-03-09 01:04:28.070740 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-09 01:04:28.070747 | orchestrator | Monday 09 March 2026 01:02:00 +0000 (0:00:00.623) 0:00:24.341 ********** 2026-03-09 01:04:28.070753 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070760 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.070767 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.070773 | orchestrator | 2026-03-09 01:04:28.070780 | orchestrator | TASK [keystone : 
Copying Keystone Domain specific settings] ******************** 2026-03-09 01:04:28.070792 | orchestrator | Monday 09 March 2026 01:02:00 +0000 (0:00:00.423) 0:00:24.765 ********** 2026-03-09 01:04:28.070798 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070805 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.070812 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.070818 | orchestrator | 2026-03-09 01:04:28.070825 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-09 01:04:28.070832 | orchestrator | Monday 09 March 2026 01:02:01 +0000 (0:00:00.560) 0:00:25.326 ********** 2026-03-09 01:04:28.070844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.070856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.070870 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.070896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.070916 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.070927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.070935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.070942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-03-09 01:04:28.070949 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.070957 | orchestrator | 2026-03-09 01:04:28.070966 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:04:28.070975 | orchestrator | Monday 09 March 2026 01:02:01 +0000 (0:00:00.824) 0:00:26.150 ********** 2026-03-09 01:04:28.070987 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.070996 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.071004 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.071012 | orchestrator | 2026-03-09 01:04:28.071021 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-09 01:04:28.071029 | orchestrator | Monday 09 March 2026 01:02:02 +0000 (0:00:00.405) 0:00:26.555 ********** 2026-03-09 01:04:28.071036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-09 01:04:28.071079 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-09 01:04:28.071089 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-09 01:04:28.071095 | orchestrator | 2026-03-09 01:04:28.071102 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-09 01:04:28.071109 | orchestrator | Monday 09 March 2026 01:02:04 +0000 (0:00:01.831) 0:00:28.386 ********** 2026-03-09 01:04:28.071116 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:04:28.071122 | orchestrator | 2026-03-09 01:04:28.071129 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-09 01:04:28.071136 | orchestrator | Monday 09 March 2026 01:02:05 +0000 (0:00:01.221) 0:00:29.608 ********** 2026-03-09 01:04:28.071142 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 01:04:28.071149 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.071156 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.071162 | orchestrator | 2026-03-09 01:04:28.071174 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-09 01:04:28.071181 | orchestrator | Monday 09 March 2026 01:02:06 +0000 (0:00:01.232) 0:00:30.841 ********** 2026-03-09 01:04:28.071188 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:04:28.071194 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 01:04:28.071201 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 01:04:28.071208 | orchestrator | 2026-03-09 01:04:28.071214 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-09 01:04:28.071221 | orchestrator | Monday 09 March 2026 01:02:07 +0000 (0:00:01.113) 0:00:31.954 ********** 2026-03-09 01:04:28.071228 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:28.071237 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.071248 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.071260 | orchestrator | 2026-03-09 01:04:28.071272 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-09 01:04:28.071279 | orchestrator | Monday 09 March 2026 01:02:08 +0000 (0:00:00.360) 0:00:32.315 ********** 2026-03-09 01:04:28.071285 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-09 01:04:28.071292 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-09 01:04:28.071299 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-09 01:04:28.071305 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-09 01:04:28.071312 | orchestrator 
| changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-09 01:04:28.071319 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-09 01:04:28.071329 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-09 01:04:28.071337 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-09 01:04:28.071343 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-09 01:04:28.071350 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-09 01:04:28.071363 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-09 01:04:28.071369 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-09 01:04:28.071376 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-09 01:04:28.071383 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-09 01:04:28.071389 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-09 01:04:28.071396 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:04:28.071403 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:04:28.071409 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:04:28.071416 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2026-03-09 01:04:28.071423 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:04:28.071430 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:04:28.071436 | orchestrator | 2026-03-09 01:04:28.071443 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-09 01:04:28.071450 | orchestrator | Monday 09 March 2026 01:02:17 +0000 (0:00:09.701) 0:00:42.017 ********** 2026-03-09 01:04:28.071456 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:04:28.071463 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:04:28.071469 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:04:28.071476 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:04:28.071483 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:04:28.071489 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:04:28.071496 | orchestrator | 2026-03-09 01:04:28.071502 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-03-09 01:04:28.071509 | orchestrator | Monday 09 March 2026 01:02:21 +0000 (0:00:03.398) 0:00:45.415 ********** 2026-03-09 01:04:28.071522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.071534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.071546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-09 01:04:28.071554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.071561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.071573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:04:28.071580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.071595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.071602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:04:28.071609 | orchestrator | 2026-03-09 01:04:28.071616 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-03-09 01:04:28.071623 | orchestrator | Monday 09 March 2026 01:02:23 +0000 (0:00:02.689) 0:00:48.104 ********** 2026-03-09 01:04:28.071629 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:04:28.071636 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:04:28.071643 | orchestrator | } 2026-03-09 01:04:28.071650 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:04:28.071657 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:04:28.071663 | orchestrator | } 2026-03-09 01:04:28.071670 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:04:28.071677 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:04:28.071683 | orchestrator | } 2026-03-09 01:04:28.071690 | orchestrator | 2026-03-09 01:04:28.071697 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:04:28.071703 | orchestrator | Monday 09 March 2026 01:02:24 +0000 (0:00:00.370) 0:00:48.475 ********** 2026-03-09 01:04:28.071711 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.071722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.071733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.071740 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.071751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.071758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.071766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.071773 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.071785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-03-09 01:04:28.071797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:04:28.071808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:04:28.071815 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.071821 | orchestrator | 2026-03-09 01:04:28.071828 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:04:28.071835 | orchestrator | Monday 09 March 2026 01:02:25 +0000 (0:00:01.024) 0:00:49.499 ********** 2026-03-09 01:04:28.071842 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.071848 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.071855 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.071862 | orchestrator | 2026-03-09 01:04:28.071868 | orchestrator | TASK 
[keystone : Creating keystone database] *********************************** 2026-03-09 01:04:28.071875 | orchestrator | Monday 09 March 2026 01:02:25 +0000 (0:00:00.326) 0:00:49.825 ********** 2026-03-09 01:04:28.071882 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.071967 | orchestrator | 2026-03-09 01:04:28.071974 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-09 01:04:28.071981 | orchestrator | Monday 09 March 2026 01:02:28 +0000 (0:00:02.381) 0:00:52.207 ********** 2026-03-09 01:04:28.071988 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.071994 | orchestrator | 2026-03-09 01:04:28.072001 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-09 01:04:28.072008 | orchestrator | Monday 09 March 2026 01:02:30 +0000 (0:00:02.341) 0:00:54.548 ********** 2026-03-09 01:04:28.072014 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:28.072021 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.072028 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.072035 | orchestrator | 2026-03-09 01:04:28.072087 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-09 01:04:28.072096 | orchestrator | Monday 09 March 2026 01:02:31 +0000 (0:00:01.151) 0:00:55.699 ********** 2026-03-09 01:04:28.072103 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:28.072110 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.072116 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.072123 | orchestrator | 2026-03-09 01:04:28.072129 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-09 01:04:28.072136 | orchestrator | Monday 09 March 2026 01:02:31 +0000 (0:00:00.475) 0:00:56.175 ********** 2026-03-09 01:04:28.072143 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.072150 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 01:04:28.072163 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.072169 | orchestrator | 2026-03-09 01:04:28.072176 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-09 01:04:28.072183 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.658) 0:00:56.834 ********** 2026-03-09 01:04:28.072189 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.072196 | orchestrator | 2026-03-09 01:04:28.072203 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-09 01:04:28.072209 | orchestrator | Monday 09 March 2026 01:02:48 +0000 (0:00:15.835) 0:01:12.670 ********** 2026-03-09 01:04:28.072216 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.072223 | orchestrator | 2026-03-09 01:04:28.072230 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:04:28.072237 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:11.889) 0:01:24.559 ********** 2026-03-09 01:04:28.072243 | orchestrator | 2026-03-09 01:04:28.072250 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:04:28.072257 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:00.072) 0:01:24.632 ********** 2026-03-09 01:04:28.072264 | orchestrator | 2026-03-09 01:04:28.072270 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:04:28.072283 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:00.076) 0:01:24.708 ********** 2026-03-09 01:04:28.072290 | orchestrator | 2026-03-09 01:04:28.072296 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-09 01:04:28.072303 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:00.064) 0:01:24.773 ********** 2026-03-09 01:04:28.072310 | orchestrator | changed: 
[testbed-node-0] 2026-03-09 01:04:28.072317 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:04:28.072323 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:04:28.072330 | orchestrator | 2026-03-09 01:04:28.072337 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-09 01:04:28.072343 | orchestrator | Monday 09 March 2026 01:03:26 +0000 (0:00:25.418) 0:01:50.191 ********** 2026-03-09 01:04:28.072350 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.072357 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:04:28.072364 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:04:28.072370 | orchestrator | 2026-03-09 01:04:28.072377 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-09 01:04:28.072384 | orchestrator | Monday 09 March 2026 01:03:37 +0000 (0:00:11.281) 0:02:01.472 ********** 2026-03-09 01:04:28.072390 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.072397 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:04:28.072404 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:04:28.072411 | orchestrator | 2026-03-09 01:04:28.072417 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:04:28.072424 | orchestrator | Monday 09 March 2026 01:03:48 +0000 (0:00:11.637) 0:02:13.109 ********** 2026-03-09 01:04:28.072431 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:04:28.072437 | orchestrator | 2026-03-09 01:04:28.072444 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-09 01:04:28.072451 | orchestrator | Monday 09 March 2026 01:03:49 +0000 (0:00:00.706) 0:02:13.816 ********** 2026-03-09 01:04:28.072458 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:28.072469 | orchestrator | ok: 
[testbed-node-0] 2026-03-09 01:04:28.072476 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:28.072483 | orchestrator | 2026-03-09 01:04:28.072490 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-09 01:04:28.072496 | orchestrator | Monday 09 March 2026 01:03:51 +0000 (0:00:01.433) 0:02:15.249 ********** 2026-03-09 01:04:28.072503 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:28.072510 | orchestrator | 2026-03-09 01:04:28.072516 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-09 01:04:28.072530 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:01.798) 0:02:17.048 ********** 2026-03-09 01:04:28.072537 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-09 01:04:28.072544 | orchestrator | 2026-03-09 01:04:28.072551 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] ************* 2026-03-09 01:04:28.072557 | orchestrator | Monday 09 March 2026 01:04:06 +0000 (0:00:13.777) 0:02:30.826 ********** 2026-03-09 01:04:28.072565 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-09 01:04:28.072571 | orchestrator | 2026-03-09 01:04:28.072578 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************ 2026-03-09 01:04:28.072585 | orchestrator | Monday 09 March 2026 01:04:11 +0000 (0:00:05.228) 0:02:36.055 ********** 2026-03-09 01:04:28.072591 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-09 01:04:28.072598 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-09 01:04:28.072607 | orchestrator | 2026-03-09 01:04:28.072615 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-09 01:04:28.072623 | orchestrator | Monday 09 March 2026 01:04:19 
+0000 (0:00:08.001) 0:02:44.057 ********** 2026-03-09 01:04:28.072631 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.072639 | orchestrator | 2026-03-09 01:04:28.072647 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-09 01:04:28.072655 | orchestrator | Monday 09 March 2026 01:04:20 +0000 (0:00:00.339) 0:02:44.396 ********** 2026-03-09 01:04:28.072664 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.072672 | orchestrator | 2026-03-09 01:04:28.072680 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-09 01:04:28.072688 | orchestrator | Monday 09 March 2026 01:04:20 +0000 (0:00:00.349) 0:02:44.746 ********** 2026-03-09 01:04:28.072696 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.072703 | orchestrator | 2026-03-09 01:04:28.072711 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-03-09 01:04:28.072719 | orchestrator | Monday 09 March 2026 01:04:20 +0000 (0:00:00.385) 0:02:45.131 ********** 2026-03-09 01:04:28.072727 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.072735 | orchestrator | 2026-03-09 01:04:28.072743 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-09 01:04:28.072751 | orchestrator | Monday 09 March 2026 01:04:21 +0000 (0:00:00.597) 0:02:45.729 ********** 2026-03-09 01:04:28.072759 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:28.072767 | orchestrator | 2026-03-09 01:04:28.072776 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:04:28.072784 | orchestrator | Monday 09 March 2026 01:04:25 +0000 (0:00:03.924) 0:02:49.653 ********** 2026-03-09 01:04:28.072792 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:28.072800 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:28.072808 | 
orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:28.072816 | orchestrator | 2026-03-09 01:04:28.072824 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:04:28.072833 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-03-09 01:04:28.072845 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-09 01:04:28.072854 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-09 01:04:28.072863 | orchestrator | 2026-03-09 01:04:28.072871 | orchestrator | 2026-03-09 01:04:28.072879 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:04:28.072887 | orchestrator | Monday 09 March 2026 01:04:26 +0000 (0:00:00.618) 0:02:50.271 ********** 2026-03-09 01:04:28.072900 | orchestrator | =============================================================================== 2026-03-09 01:04:28.072909 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.42s 2026-03-09 01:04:28.072917 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.84s 2026-03-09 01:04:28.072926 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.78s 2026-03-09 01:04:28.072934 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.89s 2026-03-09 01:04:28.072942 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.64s 2026-03-09 01:04:28.072950 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 11.28s 2026-03-09 01:04:28.072959 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.70s 2026-03-09 01:04:28.072968 | orchestrator | 
service-ks-register : keystone | Creating/deleting endpoints ------------ 8.00s 2026-03-09 01:04:28.072976 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.43s 2026-03-09 01:04:28.072983 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 5.23s 2026-03-09 01:04:28.072991 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 4.13s 2026-03-09 01:04:28.073001 | orchestrator | keystone : Creating default user role ----------------------------------- 3.92s 2026-03-09 01:04:28.073008 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.73s 2026-03-09 01:04:28.073015 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.40s 2026-03-09 01:04:28.073021 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.69s 2026-03-09 01:04:28.073028 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.38s 2026-03-09 01:04:28.073035 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.34s 2026-03-09 01:04:28.073057 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.03s 2026-03-09 01:04:28.073069 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.84s 2026-03-09 01:04:28.073080 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.83s 2026-03-09 01:04:28.073091 | orchestrator | 2026-03-09 01:04:28 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:28.073103 | orchestrator | 2026-03-09 01:04:28 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:28.073115 | orchestrator | 2026-03-09 01:04:28 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 
01:04:28.073474 | orchestrator | 2026-03-09 01:04:28 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:28.073566 | orchestrator | 2026-03-09 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:31.119425 | orchestrator | 2026-03-09 01:04:31 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:31.119965 | orchestrator | 2026-03-09 01:04:31 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:31.122536 | orchestrator | 2026-03-09 01:04:31 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:31.122681 | orchestrator | 2026-03-09 01:04:31 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:31.124555 | orchestrator | 2026-03-09 01:04:31 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:31.126519 | orchestrator | 2026-03-09 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:34.222345 | orchestrator | 2026-03-09 01:04:34 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:34.222419 | orchestrator | 2026-03-09 01:04:34 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:34.224038 | orchestrator | 2026-03-09 01:04:34 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:34.227147 | orchestrator | 2026-03-09 01:04:34 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:34.227733 | orchestrator | 2026-03-09 01:04:34 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:34.227761 | orchestrator | 2026-03-09 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:37.282832 | orchestrator | 2026-03-09 01:04:37 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:37.285521 | orchestrator 
| 2026-03-09 01:04:37 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:37.287031 | orchestrator | 2026-03-09 01:04:37 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:37.288941 | orchestrator | 2026-03-09 01:04:37 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:37.290641 | orchestrator | 2026-03-09 01:04:37 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:37.291285 | orchestrator | 2026-03-09 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:40.358116 | orchestrator | 2026-03-09 01:04:40 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:40.359177 | orchestrator | 2026-03-09 01:04:40 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:40.361923 | orchestrator | 2026-03-09 01:04:40 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:40.363715 | orchestrator | 2026-03-09 01:04:40 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:40.366288 | orchestrator | 2026-03-09 01:04:40 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:40.366570 | orchestrator | 2026-03-09 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:43.401457 | orchestrator | 2026-03-09 01:04:43 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:43.401952 | orchestrator | 2026-03-09 01:04:43 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:43.402878 | orchestrator | 2026-03-09 01:04:43 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:43.403933 | orchestrator | 2026-03-09 01:04:43 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:43.404652 | orchestrator | 
2026-03-09 01:04:43 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:43.404683 | orchestrator | 2026-03-09 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:46.440494 | orchestrator | 2026-03-09 01:04:46 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:46.442199 | orchestrator | 2026-03-09 01:04:46 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:46.442997 | orchestrator | 2026-03-09 01:04:46 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:46.445294 | orchestrator | 2026-03-09 01:04:46 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:46.447438 | orchestrator | 2026-03-09 01:04:46 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:46.447496 | orchestrator | 2026-03-09 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:49.498004 | orchestrator | 2026-03-09 01:04:49 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:49.499446 | orchestrator | 2026-03-09 01:04:49 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:49.500130 | orchestrator | 2026-03-09 01:04:49 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:49.501632 | orchestrator | 2026-03-09 01:04:49 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state STARTED 2026-03-09 01:04:49.503479 | orchestrator | 2026-03-09 01:04:49 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:49.503533 | orchestrator | 2026-03-09 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:52.536619 | orchestrator | 2026-03-09 01:04:52 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:52.537022 | orchestrator | 2026-03-09 01:04:52 | INFO  | 
Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:52.538237 | orchestrator | 2026-03-09 01:04:52 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:52.538853 | orchestrator | 2026-03-09 01:04:52 | INFO  | Task 100aac98-7798-4cb7-9af7-309e963bd8d4 is in state SUCCESS 2026-03-09 01:04:52.539511 | orchestrator | 2026-03-09 01:04:52 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:52.539659 | orchestrator | 2026-03-09 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:55.585663 | orchestrator | 2026-03-09 01:04:55 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:55.588180 | orchestrator | 2026-03-09 01:04:55 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:04:55.590140 | orchestrator | 2026-03-09 01:04:55 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:55.592447 | orchestrator | 2026-03-09 01:04:55 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:55.594242 | orchestrator | 2026-03-09 01:04:55 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED 2026-03-09 01:04:55.594293 | orchestrator | 2026-03-09 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:58.632270 | orchestrator | 2026-03-09 01:04:58 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:04:58.632377 | orchestrator | 2026-03-09 01:04:58 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:04:58.632717 | orchestrator | 2026-03-09 01:04:58 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:04:58.634592 | orchestrator | 2026-03-09 01:04:58 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:04:58.635840 | orchestrator | 2026-03-09 01:04:58 | INFO  | Task 
025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED
2026-03-09 01:04:58.635886 | orchestrator | 2026-03-09 01:04:58 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated every ~3 s from 01:05:01 through 01:05:26; tasks 7b2e1df7-e5ec-4253-ba46-133268297275, 7946169c-93a2-4791-bee6-1826068a5621, 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888, 2a9434d5-78df-497c-b6b3-e3ff6440c5bf and 025bb8f3-fc8d-48de-aba6-d18d493faca7 all remained in state STARTED ...]
2026-03-09 01:05:29.185691 | orchestrator | 2026-03-09 01:05:29 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:05:29.185820 | orchestrator | 2026-03-09 01:05:29 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:05:29.185838 | orchestrator | 2026-03-09 01:05:29 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED
2026-03-09 01:05:29.185851 | orchestrator | 2026-03-09 01:05:29 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED
2026-03-09 01:05:29.185862 | orchestrator | 2026-03-09 01:05:29 | INFO  | Task
025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state STARTED
2026-03-09 01:05:29.185874 | orchestrator | 2026-03-09 01:05:29 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:32.198706 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:05:32.198736 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:05:32.198745 | orchestrator | Monday 09 March 2026 01:04:10 +0000 (0:00:00.304) 0:00:00.304 **********
2026-03-09 01:05:32.198754 | orchestrator | ok: [testbed-manager]
2026-03-09 01:05:32.198765 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:05:32.198773 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:05:32.198782 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:05:32.198791 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:05:32.198799 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:05:32.198808 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:05:32.198825 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:05:32.198834 | orchestrator | Monday 09 March 2026 01:04:12 +0000 (0:00:01.994) 0:00:02.298 **********
2026-03-09 01:05:32.198843 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198852 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198861 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198870 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198879 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198888 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198896 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-09 01:05:32.198914 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-09 01:05:32.198946 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-09 01:05:32.198955 | orchestrator | Monday 09 March 2026 01:04:14 +0000 (0:00:01.624) 0:00:03.922 **********
2026-03-09 01:05:32.198965 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:05:32.198984 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] *************
2026-03-09 01:05:32.198993 | orchestrator | Monday 09 March 2026 01:04:16 +0000 (0:00:01.835) 0:00:05.757 **********
2026-03-09 01:05:32.199002 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-09 01:05:32.199019 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************
2026-03-09 01:05:32.199027 | orchestrator | Monday 09 March 2026 01:04:20 +0000 (0:00:04.362) 0:00:10.120 **********
2026-03-09 01:05:32.199037 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-09 01:05:32.199047 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-09 01:05:32.199065 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-09 01:05:32.199073 | orchestrator | Monday 09 March 2026 01:04:29 +0000 (0:00:09.546) 0:00:19.666 **********
2026-03-09 01:05:32.199082 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-09 01:05:32.199099 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-09 01:05:32.199108 | orchestrator | Monday 09 March 2026 01:04:34 +0000 (0:00:04.158) 0:00:23.825 **********
2026-03-09 01:05:32.199117 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-09 01:05:32.199155 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:05:32.199174 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-09 01:05:32.199191 | orchestrator | Monday 09 March 2026 01:04:38 +0000 (0:00:04.283) 0:00:28.108 **********
2026-03-09 01:05:32.199200 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-09 01:05:32.199209 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-09 01:05:32.199227 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] ***********
2026-03-09 01:05:32.199236 | orchestrator | Monday 09 March 2026 01:04:46 +0000 (0:00:07.666) 0:00:35.775 **********
2026-03-09 01:05:32.199245 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-09 01:05:32.199262 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:05:32.199271 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199281 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199290 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199299 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199308 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199336 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199345 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.199372 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:05:32.199381 | orchestrator | Monday 09 March 2026 01:04:51 +0000 (0:00:05.908) 0:00:41.684 **********
2026-03-09 01:05:32.199389 | orchestrator | ===============================================================================
2026-03-09 01:05:32.199398 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 9.55s
2026-03-09 01:05:32.199407 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.67s
2026-03-09 01:05:32.199416 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.91s
2026-03-09 01:05:32.199425 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 4.36s
2026-03-09 01:05:32.199433 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.28s
2026-03-09 01:05:32.199442 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.16s
2026-03-09 01:05:32.199451 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.99s
2026-03-09 01:05:32.199459 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.84s
2026-03-09 01:05:32.199468 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.62s
2026-03-09 01:05:32.199485 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2026-03-09 01:05:32.199519 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-09 01:05:32.199536 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-09 01:05:32.199545 | orchestrator | Monday 09 March 2026 01:04:01 +0000 (0:00:00.276) 0:00:00.277 **********
2026-03-09 01:05:32.199554 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199571 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-09 01:05:32.199586 | orchestrator | Monday 09 March 2026 01:04:02 +0000 (0:00:01.458) 0:00:01.735 **********
2026-03-09 01:05:32.199595 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199612 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-09 01:05:32.199621 | orchestrator | Monday 09 March 2026 01:04:03 +0000 (0:00:01.147) 0:00:02.882 **********
2026-03-09 01:05:32.199631 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199661 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-09 01:05:32.199675 | orchestrator | Monday 09 March 2026 01:04:05 +0000 (0:00:01.210) 0:00:04.092 **********
2026-03-09 01:05:32.199691 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199718 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-09 01:05:32.199732 | orchestrator | Monday 09 March 2026 01:04:06 +0000 (0:00:01.225) 0:00:05.318 **********
2026-03-09 01:05:32.199747 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199774 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-09 01:05:32.199787 | orchestrator | Monday 09 March 2026 01:04:07 +0000 (0:00:01.603) 0:00:06.922 **********
2026-03-09 01:05:32.199802 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199831 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-09 01:05:32.199845 | orchestrator | Monday 09 March 2026 01:04:08 +0000 (0:00:01.131) 0:00:08.053 **********
2026-03-09 01:05:32.199859 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199885 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-09 01:05:32.199900 | orchestrator | Monday 09 March 2026 01:04:11 +0000 (0:00:02.077) 0:00:10.130 **********
2026-03-09 01:05:32.199915 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.199944 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-09 01:05:32.199959 | orchestrator | Monday 09 March 2026 01:04:12 +0000 (0:00:01.191) 0:00:11.322 **********
2026-03-09 01:05:32.199975 | orchestrator | changed: [testbed-manager]
2026-03-09 01:05:32.200004 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-09 01:05:32.200014 | orchestrator | Monday 09 March 2026 01:05:06 +0000 (0:00:54.258) 0:01:05.581 **********
2026-03-09 01:05:32.200023 | orchestrator | skipping: [testbed-manager]
2026-03-09 01:05:32.200040 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-09 01:05:32.200058 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-09 01:05:32.200067 | orchestrator | Monday 09 March 2026 01:05:06 +0000 (0:00:00.304) 0:01:05.885 **********
2026-03-09 01:05:32.200075 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:05:32.200093 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-09 01:05:32.200110 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-09 01:05:32.200118 | orchestrator | Monday 09 March 2026 01:05:08 +0000 (0:00:01.947) 0:01:07.832 **********
2026-03-09 01:05:32.200182 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:05:32.200201 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-09 01:05:32.200218 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-09 01:05:32.200238 | orchestrator | Monday 09 March 2026 01:05:20 +0000 (0:00:11.314) 0:01:19.147 **********
2026-03-09 01:05:32.200248 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:05:32.200276 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:05:32.200285 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 01:05:32.200295 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.200303 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.200312 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:05:32.200369 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:05:32.200385 | orchestrator | Monday 09 March 2026 01:05:31 +0000 (0:00:11.420) 0:01:30.568 **********
2026-03-09 01:05:32.200401 | orchestrator | ===============================================================================
2026-03-09 01:05:32.200416 | orchestrator | Create admin user ------------------------------------------------------ 54.26s
2026-03-09 01:05:32.200432 | orchestrator | Restart ceph manager service ------------------------------------------- 24.68s
2026-03-09 01:05:32.200455 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s
2026-03-09 01:05:32.200470 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.60s
2026-03-09 01:05:32.200486 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.46s
2026-03-09 01:05:32.200499 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.23s
2026-03-09 01:05:32.200512 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.21s
2026-03-09 01:05:32.200526 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s
2026-03-09 01:05:32.200539 | orchestrator | Set mgr/dashboard/ssl to false
------------------------------------------ 1.15s
2026-03-09 01:05:32.200554 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s
2026-03-09 01:05:32.200567 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.30s
2026-03-09 01:05:32.200582 | orchestrator | 2026-03-09 01:05:32 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:05:32.200599 | orchestrator | 2026-03-09 01:05:32 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:05:32.200614 | orchestrator | 2026-03-09 01:05:32 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED
2026-03-09 01:05:32.200629 | orchestrator | 2026-03-09 01:05:32 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED
2026-03-09 01:05:32.200644 | orchestrator | 2026-03-09 01:05:32 | INFO  | Task 025bb8f3-fc8d-48de-aba6-d18d493faca7 is in state SUCCESS
2026-03-09 01:05:32.200659 | orchestrator | 2026-03-09 01:05:32 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated every ~3 s from 01:05:35 through 01:06:48; the four remaining tasks 7b2e1df7-e5ec-4253-ba46-133268297275, 7946169c-93a2-4791-bee6-1826068a5621, 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 and 2a9434d5-78df-497c-b6b3-e3ff6440c5bf all remained in state STARTED ...]
2026-03-09 01:06:51.403001 | orchestrator | 2026-03-09 01:06:51 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:06:51.404315 | orchestrator | 2026-03-09 01:06:51 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:06:51.404768 | orchestrator | 2026-03-09 01:06:51 | INFO  | Task
2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:06:51.405958 | orchestrator | 2026-03-09 01:06:51 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:06:51.405994 | orchestrator | 2026-03-09 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:54.453752 | orchestrator | 2026-03-09 01:06:54 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:06:54.455349 | orchestrator | 2026-03-09 01:06:54 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:06:54.457356 | orchestrator | 2026-03-09 01:06:54 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:06:54.461708 | orchestrator | 2026-03-09 01:06:54 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:06:54.461769 | orchestrator | 2026-03-09 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:57.510076 | orchestrator | 2026-03-09 01:06:57 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:06:57.512102 | orchestrator | 2026-03-09 01:06:57 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:06:57.513307 | orchestrator | 2026-03-09 01:06:57 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:06:57.514350 | orchestrator | 2026-03-09 01:06:57 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:06:57.514389 | orchestrator | 2026-03-09 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:00.573400 | orchestrator | 2026-03-09 01:07:00 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:00.585067 | orchestrator | 2026-03-09 01:07:00 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:00.586551 | orchestrator | 2026-03-09 01:07:00 | INFO  | Task 
2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:00.588277 | orchestrator | 2026-03-09 01:07:00 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:00.588446 | orchestrator | 2026-03-09 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:03.648793 | orchestrator | 2026-03-09 01:07:03 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:03.648942 | orchestrator | 2026-03-09 01:07:03 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:03.648975 | orchestrator | 2026-03-09 01:07:03 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:03.651502 | orchestrator | 2026-03-09 01:07:03 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:03.651550 | orchestrator | 2026-03-09 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:06.694292 | orchestrator | 2026-03-09 01:07:06 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:06.695753 | orchestrator | 2026-03-09 01:07:06 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:06.695904 | orchestrator | 2026-03-09 01:07:06 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:06.699529 | orchestrator | 2026-03-09 01:07:06 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:06.699603 | orchestrator | 2026-03-09 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:09.745773 | orchestrator | 2026-03-09 01:07:09 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:09.753448 | orchestrator | 2026-03-09 01:07:09 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:09.754535 | orchestrator | 2026-03-09 01:07:09 | INFO  | Task 
2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:09.758361 | orchestrator | 2026-03-09 01:07:09 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:09.758429 | orchestrator | 2026-03-09 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:12.941358 | orchestrator | 2026-03-09 01:07:12 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:12.942239 | orchestrator | 2026-03-09 01:07:12 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:12.943692 | orchestrator | 2026-03-09 01:07:12 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:12.944917 | orchestrator | 2026-03-09 01:07:12 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:12.944967 | orchestrator | 2026-03-09 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:16.047016 | orchestrator | 2026-03-09 01:07:16 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:16.047242 | orchestrator | 2026-03-09 01:07:16 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:16.048433 | orchestrator | 2026-03-09 01:07:16 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:16.049479 | orchestrator | 2026-03-09 01:07:16 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:16.049527 | orchestrator | 2026-03-09 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:19.094833 | orchestrator | 2026-03-09 01:07:19 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:19.095679 | orchestrator | 2026-03-09 01:07:19 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:19.096415 | orchestrator | 2026-03-09 01:07:19 | INFO  | Task 
2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:19.097708 | orchestrator | 2026-03-09 01:07:19 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:19.097743 | orchestrator | 2026-03-09 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:22.137735 | orchestrator | 2026-03-09 01:07:22 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:22.137825 | orchestrator | 2026-03-09 01:07:22 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:22.141505 | orchestrator | 2026-03-09 01:07:22 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:22.141559 | orchestrator | 2026-03-09 01:07:22 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:22.141568 | orchestrator | 2026-03-09 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:25.173854 | orchestrator | 2026-03-09 01:07:25 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:25.174431 | orchestrator | 2026-03-09 01:07:25 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:25.175105 | orchestrator | 2026-03-09 01:07:25 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:25.175989 | orchestrator | 2026-03-09 01:07:25 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state STARTED 2026-03-09 01:07:25.176021 | orchestrator | 2026-03-09 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:28.229029 | orchestrator | 2026-03-09 01:07:28 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:28.231030 | orchestrator | 2026-03-09 01:07:28 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:28.232087 | orchestrator | 2026-03-09 01:07:28 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:28.234119 | orchestrator | 2026-03-09 01:07:28 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:28.238429 | orchestrator | 2026-03-09 01:07:28 | INFO  | Task 2a9434d5-78df-497c-b6b3-e3ff6440c5bf is in state SUCCESS 2026-03-09 01:07:28.241158 | orchestrator | 2026-03-09 01:07:28.241248 | orchestrator | 2026-03-09 01:07:28.241290 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:07:28.241305 | orchestrator | 2026-03-09 01:07:28.241316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:07:28.241329 | orchestrator | Monday 09 March 2026 01:04:01 +0000 (0:00:00.302) 0:00:00.302 ********** 2026-03-09 01:07:28.241337 | orchestrator | ok: [testbed-manager] 2026-03-09 01:07:28.241345 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:28.241353 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:28.241360 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:28.241367 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:07:28.241374 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:07:28.241381 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:07:28.241387 | orchestrator | 2026-03-09 01:07:28.241394 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:07:28.241401 | orchestrator | Monday 09 March 2026 01:04:02 +0000 (0:00:01.016) 0:00:01.319 ********** 2026-03-09 01:07:28.241409 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241416 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241423 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241430 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241436 | 
orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241514 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241524 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-09 01:07:28.241540 | orchestrator | 2026-03-09 01:07:28.241547 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-09 01:07:28.241563 | orchestrator | 2026-03-09 01:07:28.241570 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-09 01:07:28.241615 | orchestrator | Monday 09 March 2026 01:04:02 +0000 (0:00:00.837) 0:00:02.156 ********** 2026-03-09 01:07:28.241625 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:07:28.241633 | orchestrator | 2026-03-09 01:07:28.241640 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-09 01:07:28.241647 | orchestrator | Monday 09 March 2026 01:04:04 +0000 (0:00:01.697) 0:00:03.853 ********** 2026-03-09 01:07:28.241658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 01:07:28.241669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.241678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.241705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-09 01:07:28.241719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.241731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.241863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.241905 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.241915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.241923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.241936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.241957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.241972 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.241992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242118 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:07:28.242143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-09 01:07:28.242187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242283 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242296 | orchestrator | 2026-03-09 01:07:28.242304 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-09 01:07:28.242311 | orchestrator | Monday 09 March 2026 01:04:09 +0000 (0:00:04.455) 0:00:08.309 ********** 2026-03-09 01:07:28.242318 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:07:28.242326 | orchestrator | 2026-03-09 01:07:28.242333 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-09 01:07:28.242339 | orchestrator | Monday 09 March 2026 01:04:10 +0000 (0:00:01.638) 0:00:09.947 ********** 2026-03-09 01:07:28.242347 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 01:07:28.242355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242429 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.242436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242550 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242644 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:07:28.242652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.242718 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242744 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.242751 | orchestrator | 2026-03-09 01:07:28.242758 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-09 01:07:28.242766 | orchestrator | Monday 09 March 2026 01:04:18 +0000 (0:00:07.658) 0:00:17.605 ********** 2026-03-09 01:07:28.242773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.242780 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-09 01:07:28.242793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.242812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.242833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.242844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.242855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242884 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.242897 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.242917 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:07:28.242932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.242940 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242947 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.242954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.242974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.242981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.242988 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
01:07:28.243000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243032 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.243040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243059 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.243066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243093 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.243104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243118 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.243125 | orchestrator | 2026-03-09 01:07:28.243132 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-09 01:07:28.243139 | orchestrator | Monday 09 March 2026 01:04:21 +0000 (0:00:03.263) 0:00:20.869 ********** 2026-03-09 01:07:28.243151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243197 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-09 01:07:28.243205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243257 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
01:07:28.243330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.243358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243379 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.243386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243397 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.243416 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243465 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.243490 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': 
['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:07:28.243504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.243529 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-09 01:07:28.243542 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.243972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.243994 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.244002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.244015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.244032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.244039 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.244046 | orchestrator | 2026-03-09 01:07:28.244054 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-09 01:07:28.244061 | orchestrator | Monday 09 March 2026 01:04:25 +0000 (0:00:03.840) 0:00:24.709 ********** 2026-03-09 01:07:28.244069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 01:07:28.244078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244148 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.244155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244212 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 
01:07:28.244518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.244526 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:07:28.244543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.244586 | orchestrator | 2026-03-09 01:07:28.244593 | 
orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-09 01:07:28.244601 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:07.632) 0:00:32.341 ********** 2026-03-09 01:07:28.244608 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:07:28.244615 | orchestrator | 2026-03-09 01:07:28.244622 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-09 01:07:28.244629 | orchestrator | Monday 09 March 2026 01:04:34 +0000 (0:00:01.565) 0:00:33.906 ********** 2026-03-09 01:07:28.244636 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.244643 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.244650 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.244656 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.244664 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.244671 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.244678 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.244685 | orchestrator | 2026-03-09 01:07:28.244692 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-09 01:07:28.244699 | orchestrator | Monday 09 March 2026 01:04:35 +0000 (0:00:00.834) 0:00:34.741 ********** 2026-03-09 01:07:28.244706 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:07:28.244713 | orchestrator | 2026-03-09 01:07:28.244719 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-09 01:07:28.244749 | orchestrator | Monday 09 March 2026 01:04:36 +0000 (0:00:00.860) 0:00:35.602 ********** 2026-03-09 01:07:28.244757 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.244766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.244773 | orchestrator | manager/prometheus.yml.d' path due to this 
access issue: 2026-03-09 01:07:28.244780 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.244837 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.244848 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:07:28.244855 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.244875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.244882 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-09 01:07:28.244888 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.244898 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.244909 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 01:07:28.244919 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.244936 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.244949 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-09 01:07:28.244960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.244971 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.244981 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 01:07:28.244992 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.245024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245035 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-09 01:07:28.245045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245056 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.245066 | orchestrator | ok: [testbed-node-0 -> localhost] 
2026-03-09 01:07:28.245077 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.245087 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245115 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-09 01:07:28.245127 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245149 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.245165 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:07:28.245172 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.245188 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245198 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-09 01:07:28.245208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245219 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.245228 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:07:28.245238 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.245249 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245260 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-09 01:07:28.245298 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:07:28.245309 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-09 01:07:28.245319 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:07:28.245329 | orchestrator | 2026-03-09 01:07:28.245376 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-09 01:07:28.245385 | orchestrator | Monday 09 March 2026 01:04:39 +0000 (0:00:02.644) 0:00:38.247 ********** 2026-03-09 
01:07:28.245392 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:07:28.245399 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.245406 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:07:28.245412 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.245418 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:07:28.245425 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.245432 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:07:28.245446 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.245453 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:07:28.245459 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.245466 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:07:28.245472 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.245478 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-09 01:07:28.245485 | orchestrator | 2026-03-09 01:07:28.245491 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-09 01:07:28.245497 | orchestrator | Monday 09 March 2026 01:04:58 +0000 (0:00:19.901) 0:00:58.148 ********** 2026-03-09 01:07:28.245504 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:07:28.245510 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.245516 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:07:28.245523 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.245529 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:07:28.245535 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.245541 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:07:28.245548 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.245554 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:07:28.245560 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.245567 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:07:28.245573 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.245579 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-09 01:07:28.245585 | orchestrator | 2026-03-09 01:07:28.245592 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-09 01:07:28.245598 | orchestrator | Monday 09 March 2026 01:05:04 +0000 (0:00:05.385) 0:01:03.534 ********** 2026-03-09 01:07:28.245605 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:07:28.245613 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.245626 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:07:28.245633 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 
2026-03-09 01:07:28.245639 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.245645 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:07:28.245652 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.245658 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:07:28.245664 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.245671 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:07:28.245677 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.245683 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:07:28.245690 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.245700 | orchestrator | 2026-03-09 01:07:28.245707 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-09 01:07:28.245713 | orchestrator | Monday 09 March 2026 01:05:07 +0000 (0:00:03.626) 0:01:07.161 ********** 2026-03-09 01:07:28.245719 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:07:28.245725 | orchestrator | 2026-03-09 01:07:28.245731 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-09 01:07:28.245741 | orchestrator | Monday 09 March 2026 01:05:09 +0000 (0:00:01.074) 0:01:08.235 ********** 2026-03-09 01:07:28.245747 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.245753 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.245759 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.245766 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 01:07:28.245772 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.245778 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.245784 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.245790 | orchestrator | 2026-03-09 01:07:28.245797 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-09 01:07:28.245803 | orchestrator | Monday 09 March 2026 01:05:09 +0000 (0:00:00.745) 0:01:08.980 ********** 2026-03-09 01:07:28.245809 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.245815 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.245821 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.245828 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.245834 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:28.245844 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:28.245854 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:28.245865 | orchestrator | 2026-03-09 01:07:28.245875 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-09 01:07:28.245886 | orchestrator | Monday 09 March 2026 01:05:13 +0000 (0:00:03.928) 0:01:12.909 ********** 2026-03-09 01:07:28.245896 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.245907 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.245916 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.245926 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.245937 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.245946 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.245956 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.245966 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.245976 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.245985 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.245995 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.246006 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.246134 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:07:28.246154 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.246164 | orchestrator | 2026-03-09 01:07:28.246175 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-09 01:07:28.246185 | orchestrator | Monday 09 March 2026 01:05:16 +0000 (0:00:02.661) 0:01:15.571 ********** 2026-03-09 01:07:28.246195 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:07:28.246206 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.246217 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:07:28.246228 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.246248 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-09 01:07:28.246260 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:07:28.246301 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.246307 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:07:28.246314 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.246328 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:07:28.246335 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.246341 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:07:28.246348 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.246355 | orchestrator | 2026-03-09 01:07:28.246361 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-09 01:07:28.246368 | orchestrator | Monday 09 March 2026 01:05:20 +0000 (0:00:03.803) 0:01:19.375 ********** 2026-03-09 01:07:28.246374 | orchestrator | [WARNING]: Skipped 2026-03-09 01:07:28.246381 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-09 01:07:28.246387 | orchestrator | due to this access issue: 2026-03-09 01:07:28.246394 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-09 01:07:28.246400 | orchestrator | not a directory 2026-03-09 01:07:28.246407 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:07:28.246413 | orchestrator | 2026-03-09 01:07:28.246419 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-09 01:07:28.246426 | orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:02.224) 0:01:21.599 ********** 2026-03-09 01:07:28.246432 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.246438 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.246444 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.246451 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:07:28.246457 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.246463 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.246469 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.246476 | orchestrator | 2026-03-09 01:07:28.246482 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-09 01:07:28.246493 | orchestrator | Monday 09 March 2026 01:05:23 +0000 (0:00:01.424) 0:01:23.023 ********** 2026-03-09 01:07:28.246500 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.246506 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.246512 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.246519 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.246525 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.246531 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.246537 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.246544 | orchestrator | 2026-03-09 01:07:28.246550 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-03-09 01:07:28.246556 | orchestrator | Monday 09 March 2026 01:05:25 +0000 (0:00:01.325) 0:01:24.349 ********** 2026-03-09 01:07:28.246564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-09 01:07:28.246607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246652 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:07:28.246671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-09 01:07:28.246762 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:07:28.246774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:07:28.246800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246807 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246820 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:07:28.246828 | orchestrator | 2026-03-09 01:07:28.246834 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-03-09 01:07:28.246845 | orchestrator | Monday 09 March 2026 01:05:32 +0000 (0:00:07.427) 0:01:31.776 ********** 2026-03-09 01:07:28.246852 | orchestrator | changed: [testbed-manager] => { 2026-03-09 01:07:28.246858 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246865 | orchestrator | } 2026-03-09 01:07:28.246871 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:07:28.246877 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246883 | orchestrator | } 2026-03-09 01:07:28.246889 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:07:28.246896 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246902 | orchestrator | } 2026-03-09 01:07:28.246909 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:07:28.246915 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246921 | orchestrator | } 2026-03-09 01:07:28.246927 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 01:07:28.246934 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246940 | orchestrator | } 2026-03-09 01:07:28.246946 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 01:07:28.246952 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246958 | orchestrator | } 2026-03-09 01:07:28.246964 | orchestrator | changed: [testbed-node-5] => { 
2026-03-09 01:07:28.246971 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:28.246977 | orchestrator | } 2026-03-09 01:07:28.246983 | orchestrator | 2026-03-09 01:07:28.246989 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:07:28.246996 | orchestrator | Monday 09 March 2026 01:05:34 +0000 (0:00:01.660) 0:01:33.437 ********** 2026-03-09 01:07:28.247002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 
01:07:28.247027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-09 01:07:28.247061 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247068 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247075 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:28.247087 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:07:28.247094 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247143 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247172 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.247178 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:28.247185 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:28.247195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247215 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:28.247221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:07:28.247287 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:28.247298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:07:28.247305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:07:28.247318 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:28.247325 | orchestrator | 2026-03-09 01:07:28.247331 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-09 01:07:28.247338 | orchestrator | Monday 09 March 2026 01:05:37 +0000 (0:00:03.705) 0:01:37.142 ********** 2026-03-09 01:07:28.247344 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 01:07:28.247351 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:07:28.247358 | orchestrator | 2026-03-09 01:07:28.247364 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247370 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:01.170) 0:01:38.312 ********** 2026-03-09 01:07:28.247377 | orchestrator | 2026-03-09 01:07:28.247384 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247390 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.081) 0:01:38.393 ********** 2026-03-09 01:07:28.247396 | orchestrator | 2026-03-09 01:07:28.247402 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247408 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.102) 0:01:38.495 ********** 2026-03-09 01:07:28.247415 | orchestrator | 2026-03-09 01:07:28.247421 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247434 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.069) 0:01:38.565 ********** 2026-03-09 01:07:28.247440 | orchestrator | 2026-03-09 01:07:28.247446 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247453 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.071) 
0:01:38.637 ********** 2026-03-09 01:07:28.247459 | orchestrator | 2026-03-09 01:07:28.247465 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247471 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.068) 0:01:38.705 ********** 2026-03-09 01:07:28.247477 | orchestrator | 2026-03-09 01:07:28.247484 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:07:28.247490 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.081) 0:01:38.787 ********** 2026-03-09 01:07:28.247496 | orchestrator | 2026-03-09 01:07:28.247503 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-09 01:07:28.247512 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.366) 0:01:39.154 ********** 2026-03-09 01:07:28.247519 | orchestrator | changed: [testbed-manager] 2026-03-09 01:07:28.247526 | orchestrator | 2026-03-09 01:07:28.247532 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-09 01:07:28.247539 | orchestrator | Monday 09 March 2026 01:05:56 +0000 (0:00:16.666) 0:01:55.820 ********** 2026-03-09 01:07:28.247545 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:07:28.247552 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:28.247558 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:07:28.247565 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:07:28.247571 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:28.247578 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:28.247584 | orchestrator | changed: [testbed-manager] 2026-03-09 01:07:28.247590 | orchestrator | 2026-03-09 01:07:28.247597 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-09 01:07:28.247603 | orchestrator | Monday 09 March 2026 01:06:12 +0000 (0:00:15.615) 
0:02:11.435 ********** 2026-03-09 01:07:28.247609 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:28.247616 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:28.247622 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:28.247628 | orchestrator | 2026-03-09 01:07:28.247634 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-09 01:07:28.247641 | orchestrator | Monday 09 March 2026 01:06:21 +0000 (0:00:09.130) 0:02:20.565 ********** 2026-03-09 01:07:28.247647 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:28.247653 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:28.247659 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:28.247665 | orchestrator | 2026-03-09 01:07:28.247671 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-09 01:07:28.247678 | orchestrator | Monday 09 March 2026 01:06:32 +0000 (0:00:11.357) 0:02:31.923 ********** 2026-03-09 01:07:28.247684 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:28.247690 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:28.247701 | orchestrator | changed: [testbed-manager] 2026-03-09 01:07:28.247707 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:07:28.247714 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:07:28.247720 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:07:28.247726 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:28.247733 | orchestrator | 2026-03-09 01:07:28.247739 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-09 01:07:28.247745 | orchestrator | Monday 09 March 2026 01:06:47 +0000 (0:00:14.574) 0:02:46.498 ********** 2026-03-09 01:07:28.247752 | orchestrator | changed: [testbed-manager] 2026-03-09 01:07:28.247758 | orchestrator | 2026-03-09 01:07:28.247765 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-elasticsearch-exporter container] *** 2026-03-09 01:07:28.247771 | orchestrator | Monday 09 March 2026 01:07:00 +0000 (0:00:12.947) 0:02:59.445 ********** 2026-03-09 01:07:28.247777 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:28.247788 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:28.247794 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:28.247800 | orchestrator | 2026-03-09 01:07:28.247807 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-09 01:07:28.247813 | orchestrator | Monday 09 March 2026 01:07:06 +0000 (0:00:05.802) 0:03:05.247 ********** 2026-03-09 01:07:28.247819 | orchestrator | changed: [testbed-manager] 2026-03-09 01:07:28.247826 | orchestrator | 2026-03-09 01:07:28.247832 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-09 01:07:28.247838 | orchestrator | Monday 09 March 2026 01:07:12 +0000 (0:00:06.221) 0:03:11.469 ********** 2026-03-09 01:07:28.247845 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:07:28.247851 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:07:28.247857 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:07:28.247863 | orchestrator | 2026-03-09 01:07:28.247869 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:07:28.247876 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-03-09 01:07:28.247883 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:07:28.247889 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:07:28.247896 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:07:28.247902 | orchestrator | testbed-node-3 
: ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-09 01:07:28.247908 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-09 01:07:28.247914 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-09 01:07:28.247921 | orchestrator | 2026-03-09 01:07:28.247927 | orchestrator | 2026-03-09 01:07:28.247933 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:07:28.247940 | orchestrator | Monday 09 March 2026 01:07:25 +0000 (0:00:12.849) 0:03:24.319 ********** 2026-03-09 01:07:28.247946 | orchestrator | =============================================================================== 2026-03-09 01:07:28.247952 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.90s 2026-03-09 01:07:28.247959 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.67s 2026-03-09 01:07:28.247969 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.61s 2026-03-09 01:07:28.247975 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.57s 2026-03-09 01:07:28.247982 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.95s 2026-03-09 01:07:28.247988 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.85s 2026-03-09 01:07:28.247994 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.36s 2026-03-09 01:07:28.248000 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.13s 2026-03-09 01:07:28.248006 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.66s 2026-03-09 01:07:28.248012 | orchestrator | prometheus : Copying over config.json 
files ----------------------------- 7.63s 2026-03-09 01:07:28.248019 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 7.43s 2026-03-09 01:07:28.248025 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.22s 2026-03-09 01:07:28.248036 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.80s 2026-03-09 01:07:28.248042 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.39s 2026-03-09 01:07:28.248049 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.46s 2026-03-09 01:07:28.248055 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.93s 2026-03-09 01:07:28.248061 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.84s 2026-03-09 01:07:28.248067 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.80s 2026-03-09 01:07:28.248077 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.71s 2026-03-09 01:07:28.248083 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.63s 2026-03-09 01:07:28.248090 | orchestrator | 2026-03-09 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:31.357948 | orchestrator | 2026-03-09 01:07:31 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:31.360456 | orchestrator | 2026-03-09 01:07:31 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:31.362576 | orchestrator | 2026-03-09 01:07:31 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:31.363609 | orchestrator | 2026-03-09 01:07:31 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:31.363666 | orchestrator | 2026-03-09 
01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:34.396071 | orchestrator | 2026-03-09 01:07:34 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:34.396416 | orchestrator | 2026-03-09 01:07:34 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:34.397336 | orchestrator | 2026-03-09 01:07:34 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:34.399213 | orchestrator | 2026-03-09 01:07:34 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:34.399335 | orchestrator | 2026-03-09 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:37.444131 | orchestrator | 2026-03-09 01:07:37 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:37.445196 | orchestrator | 2026-03-09 01:07:37 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:37.446195 | orchestrator | 2026-03-09 01:07:37 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:37.446876 | orchestrator | 2026-03-09 01:07:37 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:37.446903 | orchestrator | 2026-03-09 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:40.497968 | orchestrator | 2026-03-09 01:07:40 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:40.498408 | orchestrator | 2026-03-09 01:07:40 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:40.500080 | orchestrator | 2026-03-09 01:07:40 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:40.501703 | orchestrator | 2026-03-09 01:07:40 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:40.501757 | orchestrator | 2026-03-09 01:07:40 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:07:43.550721 | orchestrator | 2026-03-09 01:07:43 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:43.552393 | orchestrator | 2026-03-09 01:07:43 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:43.553847 | orchestrator | 2026-03-09 01:07:43 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:43.555320 | orchestrator | 2026-03-09 01:07:43 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:43.555359 | orchestrator | 2026-03-09 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:46.629397 | orchestrator | 2026-03-09 01:07:46 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:46.629702 | orchestrator | 2026-03-09 01:07:46 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:46.633469 | orchestrator | 2026-03-09 01:07:46 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:46.633559 | orchestrator | 2026-03-09 01:07:46 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:46.633576 | orchestrator | 2026-03-09 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:49.676524 | orchestrator | 2026-03-09 01:07:49 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:49.676620 | orchestrator | 2026-03-09 01:07:49 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:49.679082 | orchestrator | 2026-03-09 01:07:49 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:49.685635 | orchestrator | 2026-03-09 01:07:49 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:49.685687 | orchestrator | 2026-03-09 01:07:49 | INFO  | Wait 1 second(s) until the next check 
2026-03-09 01:07:52.734990 | orchestrator | 2026-03-09 01:07:52 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:52.737927 | orchestrator | 2026-03-09 01:07:52 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:52.740693 | orchestrator | 2026-03-09 01:07:52 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:52.743604 | orchestrator | 2026-03-09 01:07:52 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state STARTED 2026-03-09 01:07:52.743639 | orchestrator | 2026-03-09 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:55.787855 | orchestrator | 2026-03-09 01:07:55 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:07:55.789969 | orchestrator | 2026-03-09 01:07:55 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED 2026-03-09 01:07:55.792009 | orchestrator | 2026-03-09 01:07:55 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:07:55.794151 | orchestrator | 2026-03-09 01:07:55 | INFO  | Task 2ee8f8a4-14c2-4674-baa9-8e8b4ddee888 is in state SUCCESS 2026-03-09 01:07:55.796111 | orchestrator | 2026-03-09 01:07:55.796167 | orchestrator | 2026-03-09 01:07:55.796178 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:07:55.796189 | orchestrator | 2026-03-09 01:07:55.796198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:07:55.796208 | orchestrator | Monday 09 March 2026 01:04:10 +0000 (0:00:00.376) 0:00:00.376 ********** 2026-03-09 01:07:55.796217 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:55.796227 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:55.796236 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:55.796245 | orchestrator | 2026-03-09 01:07:55.796254 | orchestrator | TASK [Group hosts 
based on enabled services] *********************************** 2026-03-09 01:07:55.796263 | orchestrator | Monday 09 March 2026 01:04:11 +0000 (0:00:00.641) 0:00:01.017 ********** 2026-03-09 01:07:55.796612 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-09 01:07:55.796626 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-09 01:07:55.796635 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-09 01:07:55.796644 | orchestrator | 2026-03-09 01:07:55.796653 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-09 01:07:55.796661 | orchestrator | 2026-03-09 01:07:55.796671 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:07:55.796680 | orchestrator | Monday 09 March 2026 01:04:12 +0000 (0:00:01.447) 0:00:02.465 ********** 2026-03-09 01:07:55.796688 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:07:55.796698 | orchestrator | 2026-03-09 01:07:55.796707 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] *************** 2026-03-09 01:07:55.796716 | orchestrator | Monday 09 March 2026 01:04:14 +0000 (0:00:01.183) 0:00:03.648 ********** 2026-03-09 01:07:55.796725 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-09 01:07:55.796734 | orchestrator | 2026-03-09 01:07:55.796743 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] ************** 2026-03-09 01:07:55.796752 | orchestrator | Monday 09 March 2026 01:04:18 +0000 (0:00:04.365) 0:00:08.014 ********** 2026-03-09 01:07:55.796760 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-09 01:07:55.796770 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> 
public) 2026-03-09 01:07:55.796778 | orchestrator | 2026-03-09 01:07:55.796787 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-09 01:07:55.796796 | orchestrator | Monday 09 March 2026 01:04:26 +0000 (0:00:08.031) 0:00:16.046 ********** 2026-03-09 01:07:55.796805 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-09 01:07:55.796814 | orchestrator | 2026-03-09 01:07:55.796823 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-09 01:07:55.796833 | orchestrator | Monday 09 March 2026 01:04:30 +0000 (0:00:03.630) 0:00:19.676 ********** 2026-03-09 01:07:55.796849 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-09 01:07:55.796865 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:07:55.796881 | orchestrator | 2026-03-09 01:07:55.796895 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-09 01:07:55.796910 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:03.884) 0:00:23.561 ********** 2026-03-09 01:07:55.796924 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:07:55.796938 | orchestrator | 2026-03-09 01:07:55.796954 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] ************* 2026-03-09 01:07:55.796968 | orchestrator | Monday 09 March 2026 01:04:38 +0000 (0:00:04.147) 0:00:27.708 ********** 2026-03-09 01:07:55.796982 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-09 01:07:55.796998 | orchestrator | 2026-03-09 01:07:55.797013 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-09 01:07:55.797026 | orchestrator | Monday 09 March 2026 01:04:42 +0000 (0:00:04.364) 0:00:32.072 ********** 2026-03-09 01:07:55.797083 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.797119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.797145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.797172 | orchestrator | 2026-03-09 01:07:55.797189 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:07:55.797207 | orchestrator | Monday 09 March 2026 01:04:46 +0000 (0:00:04.338) 0:00:36.411 ********** 2026-03-09 01:07:55.797237 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:07:55.797250 | orchestrator | 2026-03-09 01:07:55.797260 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-09 01:07:55.797271 | orchestrator | Monday 09 March 2026 01:04:47 +0000 (0:00:00.843) 0:00:37.254 ********** 2026-03-09 01:07:55.797281 | orchestrator | 
changed: [testbed-node-0] 2026-03-09 01:07:55.797292 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:55.797367 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:55.797381 | orchestrator | 2026-03-09 01:07:55.797392 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-09 01:07:55.797403 | orchestrator | Monday 09 March 2026 01:04:52 +0000 (0:00:04.854) 0:00:42.109 ********** 2026-03-09 01:07:55.797415 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-09 01:07:55.797428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-09 01:07:55.797438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-09 01:07:55.797449 | orchestrator | 2026-03-09 01:07:55.797460 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-09 01:07:55.797470 | orchestrator | Monday 09 March 2026 01:04:54 +0000 (0:00:01.899) 0:00:44.008 ********** 2026-03-09 01:07:55.797480 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-09 01:07:55.797492 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-09 01:07:55.797503 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-09 01:07:55.797513 | orchestrator | 2026-03-09 01:07:55.797524 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] 
***** 2026-03-09 01:07:55.797536 | orchestrator | Monday 09 March 2026 01:04:55 +0000 (0:00:01.416) 0:00:45.424 ********** 2026-03-09 01:07:55.797546 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:55.797554 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:55.797563 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:55.797572 | orchestrator | 2026-03-09 01:07:55.797581 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-09 01:07:55.797590 | orchestrator | Monday 09 March 2026 01:04:56 +0000 (0:00:00.891) 0:00:46.316 ********** 2026-03-09 01:07:55.797599 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.797607 | orchestrator | 2026-03-09 01:07:55.797616 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-09 01:07:55.797625 | orchestrator | Monday 09 March 2026 01:04:56 +0000 (0:00:00.127) 0:00:46.444 ********** 2026-03-09 01:07:55.797634 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.797643 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.797651 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.797668 | orchestrator | 2026-03-09 01:07:55.797677 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:07:55.797685 | orchestrator | Monday 09 March 2026 01:04:57 +0000 (0:00:00.342) 0:00:46.787 ********** 2026-03-09 01:07:55.797694 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:07:55.797703 | orchestrator | 2026-03-09 01:07:55.797712 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-09 01:07:55.797720 | orchestrator | Monday 09 March 2026 01:04:57 +0000 (0:00:00.656) 0:00:47.443 ********** 2026-03-09 01:07:55.797744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.797756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.797776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.797786 | orchestrator | 2026-03-09 01:07:55.797795 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-09 01:07:55.797804 | orchestrator | Monday 09 March 2026 01:05:04 +0000 (0:00:06.771) 0:00:54.215 ********** 2026-03-09 01:07:55.797820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.797831 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.797845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.797861 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.797879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.797889 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.797898 | orchestrator | 2026-03-09 01:07:55.797907 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-09 01:07:55.797916 | orchestrator | Monday 09 March 2026 01:05:09 +0000 (0:00:05.174) 0:00:59.389 ********** 2026-03-09 01:07:55.797929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.797945 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.797961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 
'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.797971 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.797980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.797995 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.798004 | orchestrator | 2026-03-09 01:07:55.798061 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-09 01:07:55.798074 | orchestrator | Monday 09 March 2026 01:05:15 +0000 (0:00:06.092) 0:01:05.482 ********** 2026-03-09 01:07:55.798083 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.798092 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.798100 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.798109 | orchestrator | 2026-03-09 01:07:55.798118 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-09 01:07:55.798137 | orchestrator | Monday 09 March 2026 01:05:23 +0000 (0:00:07.191) 0:01:12.673 ********** 2026-03-09 01:07:55.798158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.798176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.798208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.798225 | orchestrator | 2026-03-09 01:07:55.798247 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-09 01:07:55.798262 | orchestrator | Monday 09 March 2026 01:05:31 +0000 (0:00:08.234) 0:01:20.908 ********** 2026-03-09 01:07:55.798277 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:55.798292 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.798334 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:55.798349 | orchestrator | 2026-03-09 01:07:55.798364 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-09 01:07:55.798378 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:08.546) 0:01:29.455 ********** 2026-03-09 01:07:55.798392 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.798408 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.798421 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 01:07:55.798437 | orchestrator | 2026-03-09 01:07:55.798452 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-09 01:07:55.798467 | orchestrator | Monday 09 March 2026 01:05:44 +0000 (0:00:04.469) 0:01:33.924 ********** 2026-03-09 01:07:55.798488 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.798497 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.798506 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.798514 | orchestrator | 2026-03-09 01:07:55.798523 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-09 01:07:55.798532 | orchestrator | Monday 09 March 2026 01:05:50 +0000 (0:00:06.211) 0:01:40.136 ********** 2026-03-09 01:07:55.798541 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.798549 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.798558 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.798567 | orchestrator | 2026-03-09 01:07:55.798576 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-09 01:07:55.798584 | orchestrator | Monday 09 March 2026 01:05:56 +0000 (0:00:05.801) 0:01:45.937 ********** 2026-03-09 01:07:55.798593 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.798602 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.798610 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.798619 | orchestrator | 2026-03-09 01:07:55.798628 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-09 01:07:55.798637 | orchestrator | Monday 09 March 2026 01:05:56 +0000 (0:00:00.404) 0:01:46.342 ********** 2026-03-09 01:07:55.798646 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-09 01:07:55.798656 | orchestrator | skipping: [testbed-node-1] 
2026-03-09 01:07:55.798665 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-09 01:07:55.798674 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.798682 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-09 01:07:55.798691 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.798700 | orchestrator | 2026-03-09 01:07:55.798708 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-09 01:07:55.798717 | orchestrator | Monday 09 March 2026 01:06:05 +0000 (0:00:08.975) 0:01:55.318 ********** 2026-03-09 01:07:55.798726 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:55.798734 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:55.798743 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.798751 | orchestrator | 2026-03-09 01:07:55.798760 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-03-09 01:07:55.798769 | orchestrator | Monday 09 March 2026 01:06:10 +0000 (0:00:05.135) 0:02:00.454 ********** 2026-03-09 01:07:55.798793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 
'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.798810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.798825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:07:55.798836 | orchestrator | 2026-03-09 01:07:55.798845 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-03-09 01:07:55.798854 | orchestrator | Monday 09 March 2026 01:06:17 +0000 (0:00:06.688) 0:02:07.142 ********** 2026-03-09 01:07:55.798868 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:07:55.798877 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:55.798887 | orchestrator | } 2026-03-09 01:07:55.798896 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:07:55.798904 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:55.798913 | orchestrator | } 2026-03-09 01:07:55.798922 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:07:55.798931 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:07:55.798939 | orchestrator | } 2026-03-09 01:07:55.798948 | orchestrator | 2026-03-09 01:07:55.798957 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:07:55.798971 | orchestrator | Monday 09 March 2026 01:06:18 +0000 (0:00:00.523) 0:02:07.666 ********** 2026-03-09 01:07:55.798980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.798990 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.799004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.799020 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.799036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:07:55.799046 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.799055 | orchestrator | 2026-03-09 01:07:55.799064 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:07:55.799073 | orchestrator | Monday 09 March 2026 01:06:24 +0000 (0:00:06.531) 0:02:14.198 ********** 2026-03-09 01:07:55.799081 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:55.799091 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:55.799099 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:55.799108 | orchestrator | 2026-03-09 01:07:55.799117 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-09 01:07:55.799126 | orchestrator | Monday 09 March 2026 01:06:25 +0000 (0:00:00.629) 0:02:14.828 ********** 
2026-03-09 01:07:55.799135 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.799143 | orchestrator | 2026-03-09 01:07:55.799152 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-09 01:07:55.799161 | orchestrator | Monday 09 March 2026 01:06:27 +0000 (0:00:02.466) 0:02:17.294 ********** 2026-03-09 01:07:55.799170 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.799178 | orchestrator | 2026-03-09 01:07:55.799187 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-09 01:07:55.799196 | orchestrator | Monday 09 March 2026 01:06:29 +0000 (0:00:02.335) 0:02:19.629 ********** 2026-03-09 01:07:55.799205 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.799214 | orchestrator | 2026-03-09 01:07:55.799223 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-09 01:07:55.799232 | orchestrator | Monday 09 March 2026 01:06:31 +0000 (0:00:01.954) 0:02:21.584 ********** 2026-03-09 01:07:55.799240 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.799249 | orchestrator | 2026-03-09 01:07:55.799258 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-09 01:07:55.799272 | orchestrator | Monday 09 March 2026 01:07:05 +0000 (0:00:33.922) 0:02:55.506 ********** 2026-03-09 01:07:55.799281 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.799290 | orchestrator | 2026-03-09 01:07:55.799327 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:07:55.799343 | orchestrator | Monday 09 March 2026 01:07:08 +0000 (0:00:02.501) 0:02:58.008 ********** 2026-03-09 01:07:55.799359 | orchestrator | 2026-03-09 01:07:55.799375 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:07:55.799390 | orchestrator | Monday 
09 March 2026 01:07:08 +0000 (0:00:00.085) 0:02:58.094 ********** 2026-03-09 01:07:55.799405 | orchestrator | 2026-03-09 01:07:55.799424 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:07:55.799434 | orchestrator | Monday 09 March 2026 01:07:08 +0000 (0:00:00.076) 0:02:58.171 ********** 2026-03-09 01:07:55.799443 | orchestrator | 2026-03-09 01:07:55.799452 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-09 01:07:55.799461 | orchestrator | Monday 09 March 2026 01:07:08 +0000 (0:00:00.077) 0:02:58.249 ********** 2026-03-09 01:07:55.799470 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:55.799479 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:55.799488 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:55.799497 | orchestrator | 2026-03-09 01:07:55.799505 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:07:55.799516 | orchestrator | testbed-node-0 : ok=28  changed=21  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:07:55.799525 | orchestrator | testbed-node-1 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:07:55.799537 | orchestrator | testbed-node-2 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:07:55.799552 | orchestrator | 2026-03-09 01:07:55.799564 | orchestrator | 2026-03-09 01:07:55.799588 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:07:55.799604 | orchestrator | Monday 09 March 2026 01:07:55 +0000 (0:00:46.706) 0:03:44.955 ********** 2026-03-09 01:07:55.799627 | orchestrator | =============================================================================== 2026-03-09 01:07:55.799641 | orchestrator | glance : Restart glance-api container ---------------------------------- 46.71s 
2026-03-09 01:07:55.799655 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 33.92s
2026-03-09 01:07:55.799668 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 8.98s
2026-03-09 01:07:55.799682 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.55s
2026-03-09 01:07:55.799696 | orchestrator | glance : Copying over config.json files for services -------------------- 8.23s
2026-03-09 01:07:55.799711 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 8.03s
2026-03-09 01:07:55.799725 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 7.19s
2026-03-09 01:07:55.799740 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.77s
2026-03-09 01:07:55.799755 | orchestrator | service-check-containers : glance | Check containers -------------------- 6.69s
2026-03-09 01:07:55.799769 | orchestrator | service-check-containers : Include tasks -------------------------------- 6.53s
2026-03-09 01:07:55.799784 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.21s
2026-03-09 01:07:55.799797 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.09s
2026-03-09 01:07:55.799806 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.80s
2026-03-09 01:07:55.799815 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.17s
2026-03-09 01:07:55.799824 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.14s
2026-03-09 01:07:55.799842 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.85s
2026-03-09 01:07:55.799851 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.47s
2026-03-09 01:07:55.799860 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 4.37s
2026-03-09 01:07:55.799869 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.36s
2026-03-09 01:07:55.799878 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.34s
2026-03-09 01:07:55.799887 | orchestrator | 2026-03-09 01:07:55 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:07:58.847550 | orchestrator | 2026-03-09 01:07:58 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:07:58.848683 | orchestrator | 2026-03-09 01:07:58 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:07:58.849738 | orchestrator | 2026-03-09 01:07:58 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:07:58.851188 | orchestrator | 2026-03-09 01:07:58 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:07:58.851749 | orchestrator | 2026-03-09 01:07:58 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:01.902788 | orchestrator | 2026-03-09 01:08:01 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:01.904245 | orchestrator | 2026-03-09 01:08:01 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:01.906226 | orchestrator | 2026-03-09 01:08:01 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:01.908238 | orchestrator | 2026-03-09 01:08:01 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:01.908301 | orchestrator | 2026-03-09 01:08:01 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:04.952382 | orchestrator | 2026-03-09 01:08:04 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:04.953636 | orchestrator | 2026-03-09 01:08:04 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:04.954937 | orchestrator | 2026-03-09 01:08:04 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:04.956885 | orchestrator | 2026-03-09 01:08:04 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:04.956938 | orchestrator | 2026-03-09 01:08:04 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:08.004810 | orchestrator | 2026-03-09 01:08:08 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:08.006575 | orchestrator | 2026-03-09 01:08:08 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:08.009283 | orchestrator | 2026-03-09 01:08:08 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:08.011453 | orchestrator | 2026-03-09 01:08:08 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:08.011516 | orchestrator | 2026-03-09 01:08:08 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:11.060465 | orchestrator | 2026-03-09 01:08:11 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:11.062584 | orchestrator | 2026-03-09 01:08:11 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:11.064770 | orchestrator | 2026-03-09 01:08:11 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:11.067310 | orchestrator | 2026-03-09 01:08:11 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:11.067407 | orchestrator | 2026-03-09 01:08:11 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:14.146863 | orchestrator | 2026-03-09 01:08:14 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:14.147467 | orchestrator | 2026-03-09 01:08:14 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:14.149008 | orchestrator | 2026-03-09 01:08:14 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:14.149951 | orchestrator | 2026-03-09 01:08:14 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:14.150004 | orchestrator | 2026-03-09 01:08:14 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:17.198886 | orchestrator | 2026-03-09 01:08:17 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:17.203033 | orchestrator | 2026-03-09 01:08:17 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:17.206127 | orchestrator | 2026-03-09 01:08:17 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:17.208378 | orchestrator | 2026-03-09 01:08:17 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:17.208737 | orchestrator | 2026-03-09 01:08:17 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:20.244863 | orchestrator | 2026-03-09 01:08:20 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:20.246635 | orchestrator | 2026-03-09 01:08:20 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:20.247215 | orchestrator | 2026-03-09 01:08:20 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state STARTED
2026-03-09 01:08:20.248993 | orchestrator | 2026-03-09 01:08:20 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:08:20.249032 | orchestrator | 2026-03-09 01:08:20 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:08:23.286467 | orchestrator | 2026-03-09 01:08:23 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED
2026-03-09 01:08:23.288077 | orchestrator | 2026-03-09 01:08:23 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:08:23.293737 | orchestrator | 2026-03-09 01:08:23 | INFO  | Task 7b2e1df7-e5ec-4253-ba46-133268297275 is in state SUCCESS
2026-03-09 01:08:23.295586 | orchestrator |
2026-03-09 01:08:23.295637 | orchestrator |
2026-03-09 01:08:23.295652 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:08:23.295668 | orchestrator |
2026-03-09 01:08:23.295679 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:08:23.295691 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:00.284) 0:00:00.284 **********
2026-03-09 01:08:23.295702 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:08:23.295715 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:08:23.295748 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:08:23.295757 | orchestrator |
2026-03-09 01:08:23.295764 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:08:23.295771 | orchestrator | Monday 09 March 2026 01:04:34 +0000 (0:00:00.365) 0:00:00.650 **********
2026-03-09 01:08:23.295777 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-09 01:08:23.295785 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-09 01:08:23.295792 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-09 01:08:23.295798 | orchestrator |
2026-03-09 01:08:23.295805 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-09 01:08:23.295812 | orchestrator |
2026-03-09 01:08:23.295839 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-09 01:08:23.295846 | orchestrator | Monday 09 March 2026 01:04:34 +0000 (0:00:00.476) 0:00:01.126 **********
2026-03-09 01:08:23.295853 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:08:23.295861 | orchestrator |
2026-03-09 01:08:23.295867 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] ***************
2026-03-09 01:08:23.295874 | orchestrator | Monday 09 March 2026 01:04:35 +0000 (0:00:00.732) 0:00:01.858 **********
2026-03-09 01:08:23.295881 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage))
2026-03-09 01:08:23.296013 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-09 01:08:23.296023 | orchestrator |
2026-03-09 01:08:23.296425 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] **************
2026-03-09 01:08:23.296438 | orchestrator | Monday 09 March 2026 01:04:42 +0000 (0:00:07.471) 0:00:09.330 **********
2026-03-09 01:08:23.296445 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal)
2026-03-09 01:08:23.296453 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public)
2026-03-09 01:08:23.296461 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-09 01:08:23.296468 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-09 01:08:23.296475 | orchestrator |
2026-03-09 01:08:23.296482 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-09 01:08:23.296489 | orchestrator | Monday 09 March 2026 01:04:56 +0000 (0:00:13.770) 0:00:23.100 **********
2026-03-09 01:08:23.296495 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:08:23.296502 | orchestrator |
2026-03-09 01:08:23.296509 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-09 01:08:23.296516 | orchestrator | Monday 09 March 2026 01:04:59 +0000 (0:00:03.222) 0:00:26.322 **********
2026-03-09 01:08:23.296523 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-09 01:08:23.296530 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:08:23.296536 | orchestrator |
2026-03-09 01:08:23.296543 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-09 01:08:23.296550 | orchestrator | Monday 09 March 2026 01:05:04 +0000 (0:00:04.275) 0:00:30.597 **********
2026-03-09 01:08:23.296556 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:08:23.296563 | orchestrator |
2026-03-09 01:08:23.296570 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] *************
2026-03-09 01:08:23.296576 | orchestrator | Monday 09 March 2026 01:05:08 +0000 (0:00:03.927) 0:00:34.525 **********
2026-03-09 01:08:23.296583 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-03-09 01:08:23.296589 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-03-09 01:08:23.296596 | orchestrator |
2026-03-09 01:08:23.296603 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-03-09 01:08:23.296609 | orchestrator | Monday 09 March 2026 01:05:16 +0000 (0:00:08.234) 0:00:42.759 **********
2026-03-09 01:08:23.296643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:08:23.296692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:08:23.296709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:08:23.296724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:08:23.296818 | orchestrator |
2026-03-09 01:08:23.296884 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-09 01:08:23.296893 | orchestrator | Monday 09 March 2026 01:05:20 +0000 (0:00:04.368) 0:00:47.127 **********
2026-03-09 01:08:23.296900 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:08:23.296907 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:08:23.296914 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:08:23.296921 | orchestrator |
2026-03-09 01:08:23.297489 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-09 01:08:23.297530 | orchestrator | Monday 09 March 2026 01:05:21 +0000 (0:00:00.628) 0:00:47.756 **********
2026-03-09 01:08:23.297542 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:08:23.297550 | orchestrator |
2026-03-09 01:08:23.297560 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-09 01:08:23.297571 | orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:01.327) 0:00:49.084 **********
2026-03-09 01:08:23.297584 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-09 01:08:23.297599 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-09 01:08:23.297610 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-09 01:08:23.297620 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-03-09 01:08:23.297631 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-03-09 01:08:23.297641 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-03-09 01:08:23.297652 | orchestrator |
2026-03-09 01:08:23.297662 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-03-09 01:08:23.297671 | orchestrator | Monday 09 March 2026 01:05:25 +0000 (0:00:02.923) 0:00:52.007 **********
2026-03-09 01:08:23.297684 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.297697 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-09 01:08:23.297770 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.297793 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-09 01:08:23.297807 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.297820 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-09 01:08:23.297841 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.297915 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-09 01:08:23.297927 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.297935 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-09 01:08:23.297951 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.297998 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-09 01:08:23.298067 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.298082 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.298091 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-03-09 01:08:23.298107 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-09 01:08:23.298145 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-09 01:08:23.298156 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 
'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-09 01:08:23.298164 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-09 01:08:23.298173 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-09 01:08:23.298186 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-09 01:08:23.298213 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-09 01:08:23.298226 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-09 01:08:23.298235 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-09 01:08:23.298243 | orchestrator | 2026-03-09 01:08:23.298251 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-09 01:08:23.298259 | orchestrator | Monday 09 March 2026 01:05:35 +0000 (0:00:09.620) 0:01:01.627 ********** 2026-03-09 01:08:23.298272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-09 01:08:23.298281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-09 01:08:23.298290 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 
'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-09 01:08:23.298298 | orchestrator | 2026-03-09 01:08:23.298306 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-09 01:08:23.298314 | orchestrator | Monday 09 March 2026 01:05:38 +0000 (0:00:03.414) 0:01:05.042 ********** 2026-03-09 01:08:23.298322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-09 01:08:23.298330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-09 01:08:23.298361 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-09 01:08:23.298369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-03-09 01:08:23.298377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-03-09 01:08:23.298385 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-03-09 01:08:23.298392 | orchestrator | 2026-03-09 01:08:23.298401 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-09 01:08:23.298409 | orchestrator | Monday 09 March 2026 01:05:42 +0000 (0:00:03.482) 0:01:08.525 ********** 2026-03-09 01:08:23.298417 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-09 01:08:23.298425 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 
2026-03-09 01:08:23.298434 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-09 01:08:23.298462 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-09 01:08:23.298470 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-09 01:08:23.298477 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-09 01:08:23.298487 | orchestrator | 2026-03-09 01:08:23.298499 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-09 01:08:23.298510 | orchestrator | Monday 09 March 2026 01:05:43 +0000 (0:00:01.303) 0:01:09.829 ********** 2026-03-09 01:08:23.298521 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.298533 | orchestrator | 2026-03-09 01:08:23.298550 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-09 01:08:23.298560 | orchestrator | Monday 09 March 2026 01:05:43 +0000 (0:00:00.173) 0:01:10.003 ********** 2026-03-09 01:08:23.298571 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.298582 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.298593 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.298604 | orchestrator | 2026-03-09 01:08:23.298615 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:08:23.298626 | orchestrator | Monday 09 March 2026 01:05:43 +0000 (0:00:00.396) 0:01:10.399 ********** 2026-03-09 01:08:23.298638 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:08:23.298651 | orchestrator | 2026-03-09 01:08:23.298663 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-09 01:08:23.298685 | orchestrator | Monday 09 March 2026 01:05:45 +0000 (0:00:01.235) 0:01:11.635 ********** 2026-03-09 01:08:23.298699 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.298714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.298762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.298777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.298878 | orchestrator | 2026-03-09 01:08:23.298885 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-09 01:08:23.298892 | orchestrator | Monday 09 March 2026 01:05:51 +0000 (0:00:06.438) 0:01:18.074 ********** 2026-03-09 01:08:23.298899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.298907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.298937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.298946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.298960 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.298968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.298976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.298983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2026-03-09 01:08:23.299043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299050 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.299057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299064 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.299071 | orchestrator | 2026-03-09 01:08:23.299078 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-09 01:08:23.299084 | orchestrator | Monday 09 March 2026 01:05:53 +0000 (0:00:01.646) 0:01:19.721 ********** 2026-03-09 01:08:23.299095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.299111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299133 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.299140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.299148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299184 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.299192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.299199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299230 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.299237 | orchestrator | 2026-03-09 01:08:23.299247 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-09 01:08:23.299254 | orchestrator | Monday 09 March 2026 01:05:55 +0000 (0:00:02.649) 0:01:22.370 ********** 
2026-03-09 01:08:23.299261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.299269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.299277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.299292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299474 | orchestrator | 2026-03-09 01:08:23.299481 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-09 01:08:23.299488 | orchestrator | Monday 09 March 2026 01:06:01 +0000 (0:00:05.884) 0:01:28.254 ********** 2026-03-09 01:08:23.299495 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-09 01:08:23.299502 
| orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.299509 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-09 01:08:23.299516 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.299522 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-09 01:08:23.299529 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.299536 | orchestrator | 2026-03-09 01:08:23.299543 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-03-09 01:08:23.299549 | orchestrator | Monday 09 March 2026 01:06:03 +0000 (0:00:01.912) 0:01:30.166 ********** 2026-03-09 01:08:23.299556 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:08:23.299563 | orchestrator | 2026-03-09 01:08:23.299570 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-03-09 01:08:23.299576 | orchestrator | Monday 09 March 2026 01:06:06 +0000 (0:00:02.299) 0:01:32.466 ********** 2026-03-09 01:08:23.299583 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:08:23.299590 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.299596 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:08:23.299603 | orchestrator | 2026-03-09 01:08:23.299609 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-09 01:08:23.299616 | orchestrator | Monday 09 March 2026 01:06:08 +0000 (0:00:02.592) 0:01:35.059 ********** 2026-03-09 01:08:23.299628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.299645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.299653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.299661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.299744 | orchestrator | 2026-03-09 01:08:23.299751 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-09 01:08:23.299757 | orchestrator | Monday 09 March 2026 01:06:26 +0000 (0:00:17.650) 0:01:52.709 ********** 2026-03-09 01:08:23.299764 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.299771 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:08:23.299778 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:08:23.299784 | orchestrator | 2026-03-09 01:08:23.299791 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-09 01:08:23.299798 | orchestrator | Monday 09 March 2026 01:06:28 +0000 (0:00:02.091) 
0:01:54.801 ********** 2026-03-09 01:08:23.299814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.299822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299849 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.299856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.299872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299898 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.299905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.299912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.299945 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.299952 | orchestrator | 2026-03-09 01:08:23.299959 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-09 01:08:23.299966 | orchestrator | Monday 
09 March 2026 01:06:29 +0000 (0:00:00.707) 0:01:55.508 ********** 2026-03-09 01:08:23.299973 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.299979 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.299986 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.299993 | orchestrator | 2026-03-09 01:08:23.299999 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-03-09 01:08:23.300011 | orchestrator | Monday 09 March 2026 01:06:29 +0000 (0:00:00.371) 0:01:55.879 ********** 2026-03-09 01:08:23.300019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.300026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.300041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:08:23.300049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 
01:08:23.300104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:08:23.300130 | orchestrator | 2026-03-09 01:08:23.300137 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-03-09 01:08:23.300143 | orchestrator | Monday 09 March 2026 01:06:32 +0000 (0:00:03.027) 0:01:58.907 ********** 2026-03-09 01:08:23.300151 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:08:23.300157 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:08:23.300164 | orchestrator | } 2026-03-09 01:08:23.300171 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:08:23.300178 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:08:23.300185 | orchestrator | } 2026-03-09 01:08:23.300191 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:08:23.300198 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:08:23.300205 | orchestrator | } 2026-03-09 01:08:23.300212 | orchestrator | 2026-03-09 01:08:23.300218 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:08:23.300225 | orchestrator | Monday 09 March 2026 01:06:33 +0000 (0:00:00.881) 0:01:59.788 ********** 2026-03-09 01:08:23.300232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.300248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.300261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300301 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.300313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300327 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.300352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:08:23.300360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:08:23.300382 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.300388 | orchestrator | 2026-03-09 01:08:23.300399 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:08:23.300413 | orchestrator | Monday 09 March 2026 01:06:36 +0000 (0:00:03.257) 0:02:03.045 ********** 2026-03-09 01:08:23.300420 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.300426 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:08:23.300433 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:08:23.300440 | orchestrator | 2026-03-09 01:08:23.300446 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-09 01:08:23.300457 | orchestrator | Monday 09 March 2026 01:06:37 +0000 (0:00:00.656) 0:02:03.701 ********** 2026-03-09 01:08:23.300464 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300471 | orchestrator | 2026-03-09 
01:08:23.300478 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-09 01:08:23.300485 | orchestrator | Monday 09 March 2026 01:06:39 +0000 (0:00:02.370) 0:02:06.072 ********** 2026-03-09 01:08:23.300491 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300498 | orchestrator | 2026-03-09 01:08:23.300505 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-09 01:08:23.300511 | orchestrator | Monday 09 March 2026 01:06:42 +0000 (0:00:03.111) 0:02:09.183 ********** 2026-03-09 01:08:23.300518 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300525 | orchestrator | 2026-03-09 01:08:23.300531 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:08:23.300538 | orchestrator | Monday 09 March 2026 01:07:02 +0000 (0:00:19.865) 0:02:29.049 ********** 2026-03-09 01:08:23.300545 | orchestrator | 2026-03-09 01:08:23.300551 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:08:23.300558 | orchestrator | Monday 09 March 2026 01:07:02 +0000 (0:00:00.077) 0:02:29.127 ********** 2026-03-09 01:08:23.300565 | orchestrator | 2026-03-09 01:08:23.300571 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:08:23.300578 | orchestrator | Monday 09 March 2026 01:07:02 +0000 (0:00:00.069) 0:02:29.197 ********** 2026-03-09 01:08:23.300585 | orchestrator | 2026-03-09 01:08:23.300591 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-09 01:08:23.300598 | orchestrator | Monday 09 March 2026 01:07:02 +0000 (0:00:00.070) 0:02:29.267 ********** 2026-03-09 01:08:23.300605 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300612 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:08:23.300619 | orchestrator | changed: 
[testbed-node-2] 2026-03-09 01:08:23.300626 | orchestrator | 2026-03-09 01:08:23.300632 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-09 01:08:23.300639 | orchestrator | Monday 09 March 2026 01:07:34 +0000 (0:00:31.416) 0:03:00.684 ********** 2026-03-09 01:08:23.300646 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:08:23.300652 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:08:23.300659 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300666 | orchestrator | 2026-03-09 01:08:23.300673 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-09 01:08:23.300679 | orchestrator | Monday 09 March 2026 01:07:44 +0000 (0:00:10.504) 0:03:11.188 ********** 2026-03-09 01:08:23.300686 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300693 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:08:23.300699 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:08:23.300706 | orchestrator | 2026-03-09 01:08:23.300713 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-09 01:08:23.300720 | orchestrator | Monday 09 March 2026 01:08:10 +0000 (0:00:26.018) 0:03:37.207 ********** 2026-03-09 01:08:23.300726 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:08:23.300733 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:08:23.300740 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:08:23.300746 | orchestrator | 2026-03-09 01:08:23.300753 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-09 01:08:23.300760 | orchestrator | Monday 09 March 2026 01:08:21 +0000 (0:00:10.749) 0:03:47.957 ********** 2026-03-09 01:08:23.300767 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:08:23.300778 | orchestrator | 2026-03-09 01:08:23.300785 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-09 01:08:23.300792 | orchestrator | testbed-node-0 : ok=32  changed=23  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:08:23.300799 | orchestrator | testbed-node-1 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 01:08:23.300806 | orchestrator | testbed-node-2 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 01:08:23.300813 | orchestrator | 2026-03-09 01:08:23.300820 | orchestrator | 2026-03-09 01:08:23.300826 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:08:23.300833 | orchestrator | Monday 09 March 2026 01:08:21 +0000 (0:00:00.322) 0:03:48.279 ********** 2026-03-09 01:08:23.300840 | orchestrator | =============================================================================== 2026-03-09 01:08:23.300847 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.42s 2026-03-09 01:08:23.300853 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.02s 2026-03-09 01:08:23.300860 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.87s 2026-03-09 01:08:23.300867 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 17.65s 2026-03-09 01:08:23.300873 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 13.77s 2026-03-09 01:08:23.300880 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.75s 2026-03-09 01:08:23.300887 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.50s 2026-03-09 01:08:23.300893 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 9.62s 2026-03-09 01:08:23.300908 | orchestrator | service-ks-register : cinder | 
Granting/revoking user roles ------------- 8.24s 2026-03-09 01:08:23.300919 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 7.47s 2026-03-09 01:08:23.300930 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 6.44s 2026-03-09 01:08:23.300941 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.88s 2026-03-09 01:08:23.300956 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.37s 2026-03-09 01:08:23.300968 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.27s 2026-03-09 01:08:23.300979 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.93s 2026-03-09 01:08:23.300991 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.48s 2026-03-09 01:08:23.301003 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.41s 2026-03-09 01:08:23.301013 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.26s 2026-03-09 01:08:23.301023 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.22s 2026-03-09 01:08:23.301030 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 3.11s 2026-03-09 01:08:23.301037 | orchestrator | 2026-03-09 01:08:23 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:23.301044 | orchestrator | 2026-03-09 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:26.344149 | orchestrator | 2026-03-09 01:08:26 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:26.344278 | orchestrator | 2026-03-09 01:08:26 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:26.345020 | orchestrator | 2026-03-09 01:08:26 | 
INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:26.346910 | orchestrator | 2026-03-09 01:08:26 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:26.346996 | orchestrator | 2026-03-09 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:29.391636 | orchestrator | 2026-03-09 01:08:29 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:29.392473 | orchestrator | 2026-03-09 01:08:29 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:29.393075 | orchestrator | 2026-03-09 01:08:29 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:29.393964 | orchestrator | 2026-03-09 01:08:29 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:29.394002 | orchestrator | 2026-03-09 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:32.438794 | orchestrator | 2026-03-09 01:08:32 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:32.439655 | orchestrator | 2026-03-09 01:08:32 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:32.442093 | orchestrator | 2026-03-09 01:08:32 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:32.443015 | orchestrator | 2026-03-09 01:08:32 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:32.443050 | orchestrator | 2026-03-09 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:35.490009 | orchestrator | 2026-03-09 01:08:35 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:35.492128 | orchestrator | 2026-03-09 01:08:35 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:35.493909 | orchestrator | 2026-03-09 01:08:35 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:35.496507 | orchestrator | 2026-03-09 01:08:35 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:35.496563 | orchestrator | 2026-03-09 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:38.540098 | orchestrator | 2026-03-09 01:08:38 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:38.541240 | orchestrator | 2026-03-09 01:08:38 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:38.544154 | orchestrator | 2026-03-09 01:08:38 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:38.546617 | orchestrator | 2026-03-09 01:08:38 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:38.546907 | orchestrator | 2026-03-09 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:41.591756 | orchestrator | 2026-03-09 01:08:41 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:41.592812 | orchestrator | 2026-03-09 01:08:41 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:41.595942 | orchestrator | 2026-03-09 01:08:41 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:41.597622 | orchestrator | 2026-03-09 01:08:41 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:41.597664 | orchestrator | 2026-03-09 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:44.644534 | orchestrator | 2026-03-09 01:08:44 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:44.645734 | orchestrator | 2026-03-09 01:08:44 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:44.647882 | orchestrator | 2026-03-09 01:08:44 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:44.649753 | orchestrator | 2026-03-09 01:08:44 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:44.649814 | orchestrator | 2026-03-09 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:47.696545 | orchestrator | 2026-03-09 01:08:47 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:47.696774 | orchestrator | 2026-03-09 01:08:47 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:47.697646 | orchestrator | 2026-03-09 01:08:47 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:47.699128 | orchestrator | 2026-03-09 01:08:47 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:47.699157 | orchestrator | 2026-03-09 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:50.741050 | orchestrator | 2026-03-09 01:08:50 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:50.741850 | orchestrator | 2026-03-09 01:08:50 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:50.744618 | orchestrator | 2026-03-09 01:08:50 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:50.747899 | orchestrator | 2026-03-09 01:08:50 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:50.747953 | orchestrator | 2026-03-09 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:53.785932 | orchestrator | 2026-03-09 01:08:53 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:53.786434 | orchestrator | 2026-03-09 01:08:53 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:53.787119 | orchestrator | 2026-03-09 01:08:53 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:53.787791 | orchestrator | 2026-03-09 01:08:53 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:53.787908 | orchestrator | 2026-03-09 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:56.820675 | orchestrator | 2026-03-09 01:08:56 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:56.821074 | orchestrator | 2026-03-09 01:08:56 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:56.822070 | orchestrator | 2026-03-09 01:08:56 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:56.822907 | orchestrator | 2026-03-09 01:08:56 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:56.822958 | orchestrator | 2026-03-09 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:59.858966 | orchestrator | 2026-03-09 01:08:59 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:08:59.859619 | orchestrator | 2026-03-09 01:08:59 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:08:59.860635 | orchestrator | 2026-03-09 01:08:59 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:08:59.861525 | orchestrator | 2026-03-09 01:08:59 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:08:59.861549 | orchestrator | 2026-03-09 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:02.895624 | orchestrator | 2026-03-09 01:09:02 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:02.896057 | orchestrator | 2026-03-09 01:09:02 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:02.897140 | orchestrator | 2026-03-09 01:09:02 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:02.897729 | orchestrator | 2026-03-09 01:09:02 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:02.897798 | orchestrator | 2026-03-09 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:05.929572 | orchestrator | 2026-03-09 01:09:05 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:05.930159 | orchestrator | 2026-03-09 01:09:05 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:05.933011 | orchestrator | 2026-03-09 01:09:05 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:05.933316 | orchestrator | 2026-03-09 01:09:05 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:05.933334 | orchestrator | 2026-03-09 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:08.958436 | orchestrator | 2026-03-09 01:09:08 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:08.960519 | orchestrator | 2026-03-09 01:09:08 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:08.960605 | orchestrator | 2026-03-09 01:09:08 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:08.961608 | orchestrator | 2026-03-09 01:09:08 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:08.961664 | orchestrator | 2026-03-09 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:11.992216 | orchestrator | 2026-03-09 01:09:11 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:11.993752 | orchestrator | 2026-03-09 01:09:11 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:11.994617 | orchestrator | 2026-03-09 01:09:11 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:11.995550 | orchestrator | 2026-03-09 01:09:11 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:11.995575 | orchestrator | 2026-03-09 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:15.040095 | orchestrator | 2026-03-09 01:09:15 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:15.041709 | orchestrator | 2026-03-09 01:09:15 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:15.042466 | orchestrator | 2026-03-09 01:09:15 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:15.043353 | orchestrator | 2026-03-09 01:09:15 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:15.043422 | orchestrator | 2026-03-09 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:18.079552 | orchestrator | 2026-03-09 01:09:18 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:18.079616 | orchestrator | 2026-03-09 01:09:18 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:18.079627 | orchestrator | 2026-03-09 01:09:18 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:18.079634 | orchestrator | 2026-03-09 01:09:18 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:18.079657 | orchestrator | 2026-03-09 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:21.128842 | orchestrator | 2026-03-09 01:09:21 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:21.129251 | orchestrator | 2026-03-09 01:09:21 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:21.129869 | orchestrator | 2026-03-09 01:09:21 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:21.130707 | orchestrator | 2026-03-09 01:09:21 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:21.130743 | orchestrator | 2026-03-09 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:24.152509 | orchestrator | 2026-03-09 01:09:24 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:24.155222 | orchestrator | 2026-03-09 01:09:24 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:24.155268 | orchestrator | 2026-03-09 01:09:24 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:24.155275 | orchestrator | 2026-03-09 01:09:24 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:24.155295 | orchestrator | 2026-03-09 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:27.181079 | orchestrator | 2026-03-09 01:09:27 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:27.181653 | orchestrator | 2026-03-09 01:09:27 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:27.182550 | orchestrator | 2026-03-09 01:09:27 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:27.183331 | orchestrator | 2026-03-09 01:09:27 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:27.183505 | orchestrator | 2026-03-09 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:30.206505 | orchestrator | 2026-03-09 01:09:30 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:30.207119 | orchestrator | 2026-03-09 01:09:30 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:30.207681 | orchestrator | 2026-03-09 01:09:30 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:30.214883 | orchestrator | 2026-03-09 01:09:30 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:30.214927 | orchestrator | 2026-03-09 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:33.242629 | orchestrator | 2026-03-09 01:09:33 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:33.245057 | orchestrator | 2026-03-09 01:09:33 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:33.245684 | orchestrator | 2026-03-09 01:09:33 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:33.246621 | orchestrator | 2026-03-09 01:09:33 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:33.246667 | orchestrator | 2026-03-09 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:36.272202 | orchestrator | 2026-03-09 01:09:36 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:36.272495 | orchestrator | 2026-03-09 01:09:36 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:36.273630 | orchestrator | 2026-03-09 01:09:36 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:36.274241 | orchestrator | 2026-03-09 01:09:36 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:36.274258 | orchestrator | 2026-03-09 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:39.299424 | orchestrator | 2026-03-09 01:09:39 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:39.301313 | orchestrator | 2026-03-09 01:09:39 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:39.301350 | orchestrator | 2026-03-09 01:09:39 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:39.301359 | orchestrator | 2026-03-09 01:09:39 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:39.301366 | orchestrator | 2026-03-09 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:42.337364 | orchestrator | 2026-03-09 01:09:42 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:42.337577 | orchestrator | 2026-03-09 01:09:42 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:42.338488 | orchestrator | 2026-03-09 01:09:42 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:42.339074 | orchestrator | 2026-03-09 01:09:42 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:42.339119 | orchestrator | 2026-03-09 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:45.377419 | orchestrator | 2026-03-09 01:09:45 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:45.378081 | orchestrator | 2026-03-09 01:09:45 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:45.380305 | orchestrator | 2026-03-09 01:09:45 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:45.381699 | orchestrator | 2026-03-09 01:09:45 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:45.381974 | orchestrator | 2026-03-09 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:48.425789 | orchestrator | 2026-03-09 01:09:48 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:48.425872 | orchestrator | 2026-03-09 01:09:48 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:48.426554 | orchestrator | 2026-03-09 01:09:48 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:48.427633 | orchestrator | 2026-03-09 01:09:48 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:48.427673 | orchestrator | 2026-03-09 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:51.476459 | orchestrator | 2026-03-09 01:09:51 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:51.477130 | orchestrator | 2026-03-09 01:09:51 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:51.478073 | orchestrator | 2026-03-09 01:09:51 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:51.478907 | orchestrator | 2026-03-09 01:09:51 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:51.478964 | orchestrator | 2026-03-09 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:54.504094 | orchestrator | 2026-03-09 01:09:54 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:54.504321 | orchestrator | 2026-03-09 01:09:54 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:54.504845 | orchestrator | 2026-03-09 01:09:54 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:54.505413 | orchestrator | 2026-03-09 01:09:54 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:54.505432 | orchestrator | 2026-03-09 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:57.530352 | orchestrator | 2026-03-09 01:09:57 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:09:57.530499 | orchestrator | 2026-03-09 01:09:57 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:09:57.531131 | orchestrator | 2026-03-09 01:09:57 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:09:57.531590 | orchestrator | 2026-03-09 01:09:57 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:09:57.531616 | orchestrator | 2026-03-09 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:00.552446 | orchestrator | 2026-03-09 01:10:00 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:00.552941 | orchestrator | 2026-03-09 01:10:00 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:00.553830 | orchestrator | 2026-03-09 01:10:00 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:00.554806 | orchestrator | 2026-03-09 01:10:00 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:00.554825 | orchestrator | 2026-03-09 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:03.579450 | orchestrator | 2026-03-09 01:10:03 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:03.579852 | orchestrator | 2026-03-09 01:10:03 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:03.580926 | orchestrator | 2026-03-09 01:10:03 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:03.582013 | orchestrator | 2026-03-09 01:10:03 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:03.582137 | orchestrator | 2026-03-09 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:06.607293 | orchestrator | 2026-03-09 01:10:06 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:06.607921 | orchestrator | 2026-03-09 01:10:06 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:06.608509 | orchestrator | 2026-03-09 01:10:06 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:06.609229 | orchestrator | 2026-03-09 01:10:06 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:06.609256 | orchestrator | 2026-03-09 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:09.638090 | orchestrator | 2026-03-09 01:10:09 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:09.638339 | orchestrator | 2026-03-09 01:10:09 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:09.639549 | orchestrator | 2026-03-09 01:10:09 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:09.640309 | orchestrator | 2026-03-09 01:10:09 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:09.640400 | orchestrator | 2026-03-09 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:12.678672 | orchestrator | 2026-03-09 01:10:12 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:12.679469 | orchestrator | 2026-03-09 01:10:12 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:12.680812 | orchestrator | 2026-03-09 01:10:12 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:12.681787 | orchestrator | 2026-03-09 01:10:12 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:12.681859 | orchestrator | 2026-03-09 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:15.712474 | orchestrator | 2026-03-09 01:10:15 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:15.714063 | orchestrator | 2026-03-09 01:10:15 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:15.716168 | orchestrator | 2026-03-09 01:10:15 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:15.716823 | orchestrator | 2026-03-09 01:10:15 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:15.716845 | orchestrator | 2026-03-09 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:18.753781 | orchestrator | 2026-03-09 01:10:18 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:18.754055 | orchestrator | 2026-03-09 01:10:18 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:18.754965 | orchestrator | 2026-03-09 01:10:18 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:18.755786 | orchestrator | 2026-03-09 01:10:18 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:18.755888 | orchestrator | 2026-03-09 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:21.793903 | orchestrator | 2026-03-09 01:10:21 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:21.794640 | orchestrator | 2026-03-09 01:10:21 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:21.795981 | orchestrator | 2026-03-09 01:10:21 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:21.797488 | orchestrator | 2026-03-09 01:10:21 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:21.797537 | orchestrator | 2026-03-09 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:24.827335 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state STARTED 2026-03-09 01:10:24.827801 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:24.829198 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:24.830160 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:24.830204 | orchestrator | 2026-03-09 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:27.874610 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task cba195b4-a0d7-45db-ae02-b3da68c5d1c4 is in state SUCCESS 2026-03-09 01:10:27.876771 | orchestrator | 2026-03-09 01:10:27.876842 | orchestrator | 2026-03-09 01:10:27.876856 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:10:27.876868 | orchestrator | 2026-03-09 01:10:27.876904 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:10:27.876916 | orchestrator | Monday 09 March 2026 01:08:00 +0000 (0:00:00.333) 0:00:00.333 ********** 2026-03-09 01:10:27.876928 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:10:27.876940 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:10:27.876951 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:10:27.876961 | orchestrator | 2026-03-09 01:10:27.876972 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:10:27.876983 | orchestrator | Monday 09 March 2026 01:08:00 +0000 (0:00:00.331) 0:00:00.664 ********** 2026-03-09 01:10:27.876994 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-09 01:10:27.877020 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-09 01:10:27.877031 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-09 01:10:27.877042 | orchestrator | 2026-03-09 01:10:27.877053 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-09 01:10:27.877064 | orchestrator | 2026-03-09 01:10:27.877075 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2026-03-09 01:10:27.877086 | orchestrator | Monday 09 March 2026 01:08:01 +0000 (0:00:00.507) 0:00:01.172 ********** 2026-03-09 01:10:27.877097 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:10:27.877108 | orchestrator | 2026-03-09 01:10:27.877119 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-03-09 01:10:27.877130 | orchestrator | Monday 09 March 2026 01:08:02 +0000 (0:00:00.668) 0:00:01.840 ********** 2026-03-09 01:10:27.877141 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-09 01:10:27.877152 | orchestrator | 2026-03-09 01:10:27.877162 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************ 2026-03-09 01:10:27.877173 | orchestrator | Monday 09 March 2026 01:08:06 +0000 (0:00:03.991) 0:00:05.831 ********** 2026-03-09 01:10:27.877184 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-09 01:10:27.877195 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-09 01:10:27.877206 | orchestrator | 2026-03-09 01:10:27.877216 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-09 01:10:27.877228 | orchestrator | Monday 09 March 2026 01:08:12 +0000 (0:00:06.697) 0:00:12.529 ********** 2026-03-09 01:10:27.877239 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:10:27.877250 | orchestrator | 2026-03-09 01:10:27.877261 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-09 01:10:27.877272 | orchestrator | Monday 09 March 2026 01:08:16 +0000 (0:00:03.614) 0:00:16.144 ********** 2026-03-09 01:10:27.877585 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> service) 2026-03-09 01:10:27.877608 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:10:27.877621 | orchestrator | 2026-03-09 01:10:27.877633 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-09 01:10:27.877645 | orchestrator | Monday 09 March 2026 01:08:21 +0000 (0:00:04.845) 0:00:20.990 ********** 2026-03-09 01:10:27.877658 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:10:27.877671 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-09 01:10:27.877685 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-09 01:10:27.877696 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-09 01:10:27.877707 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-09 01:10:27.877718 | orchestrator | 2026-03-09 01:10:27.877728 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] *********** 2026-03-09 01:10:27.877739 | orchestrator | Monday 09 March 2026 01:08:40 +0000 (0:00:19.322) 0:00:40.313 ********** 2026-03-09 01:10:27.877750 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-09 01:10:27.877773 | orchestrator | 2026-03-09 01:10:27.877784 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-09 01:10:27.877795 | orchestrator | Monday 09 March 2026 01:08:45 +0000 (0:00:04.458) 0:00:44.771 ********** 2026-03-09 01:10:27.877810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.877847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.877862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.877875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.877897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.877951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.877973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.877991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.878003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.878573 | orchestrator | 2026-03-09 01:10:27.878607 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-09 01:10:27.878619 | orchestrator | Monday 09 March 2026 01:08:47 +0000 (0:00:02.353) 0:00:47.124 ********** 2026-03-09 01:10:27.878631 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-09 01:10:27.878642 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-09 01:10:27.878653 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-09 01:10:27.878664 | orchestrator | 2026-03-09 01:10:27.878674 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-09 01:10:27.878685 | orchestrator | Monday 09 March 2026 01:08:49 +0000 (0:00:01.760) 0:00:48.885 ********** 2026-03-09 01:10:27.878708 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.878720 | orchestrator | 2026-03-09 01:10:27.878731 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-09 01:10:27.878741 | orchestrator | Monday 09 March 2026 01:08:49 +0000 (0:00:00.244) 0:00:49.130 ********** 2026-03-09 
01:10:27.878752 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.878763 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.878774 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.878785 | orchestrator | 2026-03-09 01:10:27.878796 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-09 01:10:27.878806 | orchestrator | Monday 09 March 2026 01:08:50 +0000 (0:00:00.907) 0:00:50.037 ********** 2026-03-09 01:10:27.878817 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:10:27.878828 | orchestrator | 2026-03-09 01:10:27.878839 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-09 01:10:27.878850 | orchestrator | Monday 09 March 2026 01:08:51 +0000 (0:00:00.855) 0:00:50.893 ********** 2026-03-09 01:10:27.878863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 
01:10:27.878927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.878942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.878963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.878976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.878988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879065 | orchestrator | 2026-03-09 01:10:27.879076 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-09 01:10:27.879088 | orchestrator | Monday 09 March 2026 01:08:55 +0000 (0:00:04.204) 0:00:55.097 ********** 2026-03-09 01:10:27.879100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.879112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879142 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.879160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 
01:10:27.879172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879202 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.879217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.879238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879272 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.879284 | orchestrator | 2026-03-09 01:10:27.879296 | orchestrator | TASK [service-cert-copy : 
barbican | Copying over backend internal TLS key] **** 2026-03-09 01:10:27.879307 | orchestrator | Monday 09 March 2026 01:08:57 +0000 (0:00:02.021) 0:00:57.119 ********** 2026-03-09 01:10:27.879319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.879338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.879350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879398 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.879479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879515 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.879534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.879554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.879577 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.879589 | orchestrator | 2026-03-09 01:10:27.879600 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-09 01:10:27.879610 | orchestrator | Monday 09 March 2026 01:08:58 +0000 (0:00:01.399) 0:00:58.519 ********** 2026-03-09 01:10:27.879639 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.879663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.879676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.879688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.879776 | orchestrator | 2026-03-09 01:10:27.879788 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-09 01:10:27.879801 | orchestrator | Monday 09 March 2026 01:09:03 +0000 (0:00:04.853) 0:01:03.372 ********** 2026-03-09 01:10:27.879819 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.879837 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:10:27.879853 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:10:27.879868 | orchestrator | 2026-03-09 01:10:27.879884 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-09 01:10:27.879900 | orchestrator | Monday 09 March 2026 01:09:06 +0000 (0:00:03.120) 0:01:06.492 ********** 2026-03-09 01:10:27.879917 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-03-09 01:10:27.879934 | orchestrator | 2026-03-09 01:10:27.879950 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-09 01:10:27.879967 | orchestrator | Monday 09 March 2026 01:09:08 +0000 (0:00:02.014) 0:01:08.506 ********** 2026-03-09 01:10:27.879983 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.879998 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.880013 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.880027 | orchestrator | 2026-03-09 01:10:27.880042 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-09 01:10:27.880056 | orchestrator | Monday 09 March 2026 01:09:10 +0000 (0:00:01.360) 0:01:09.867 ********** 2026-03-09 01:10:27.880082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.880119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.880139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-09 01:10:27.880157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-03-09 01:10:27.880285 | orchestrator | 2026-03-09 01:10:27.880295 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-09 01:10:27.880305 | orchestrator | Monday 09 March 2026 01:09:23 +0000 (0:00:13.530) 0:01:23.398 ********** 2026-03-09 01:10:27.880316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.880332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880370 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.880381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.880391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880434 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.880479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.880513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880539 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.880550 | orchestrator | 2026-03-09 01:10:27.880559 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-03-09 01:10:27.880569 | orchestrator | Monday 09 March 2026 01:09:24 +0000 (0:00:01.311) 
0:01:24.711 ********** 2026-03-09 01:10:27.880579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.880590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.880613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:10:27.880629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880639 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880684 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:10:27.880727 | orchestrator | 2026-03-09 01:10:27.880742 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-03-09 01:10:27.880758 | orchestrator | Monday 09 March 2026 01:09:29 +0000 (0:00:04.196) 0:01:28.908 ********** 2026-03-09 01:10:27.880775 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:10:27.880790 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:10:27.880800 | orchestrator | } 2026-03-09 01:10:27.880810 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:10:27.880820 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:10:27.880830 | orchestrator | } 2026-03-09 01:10:27.880839 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 
01:10:27.880849 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:10:27.880859 | orchestrator | } 2026-03-09 01:10:27.880868 | orchestrator | 2026-03-09 01:10:27.880883 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:10:27.880893 | orchestrator | Monday 09 March 2026 01:09:29 +0000 (0:00:00.429) 0:01:29.337 ********** 2026-03-09 01:10:27.880904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.880915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880943 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.880960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.880976 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.880996 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.881007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:10:27.881024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.881034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:10:27.881044 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.881061 | orchestrator | 2026-03-09 01:10:27.881078 | orchestrator | TASK [barbican : include_tasks] ************************************************ 
2026-03-09 01:10:27.881094 | orchestrator | Monday 09 March 2026 01:09:31 +0000 (0:00:02.017) 0:01:31.355 ********** 2026-03-09 01:10:27.881109 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:10:27.881125 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:10:27.881141 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:10:27.881156 | orchestrator | 2026-03-09 01:10:27.881172 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-09 01:10:27.881198 | orchestrator | Monday 09 March 2026 01:09:32 +0000 (0:00:00.881) 0:01:32.236 ********** 2026-03-09 01:10:27.881214 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.881230 | orchestrator | 2026-03-09 01:10:27.881247 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-09 01:10:27.881262 | orchestrator | Monday 09 March 2026 01:09:35 +0000 (0:00:02.550) 0:01:34.787 ********** 2026-03-09 01:10:27.881278 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.881294 | orchestrator | 2026-03-09 01:10:27.881309 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-09 01:10:27.881326 | orchestrator | Monday 09 March 2026 01:09:37 +0000 (0:00:02.675) 0:01:37.462 ********** 2026-03-09 01:10:27.881343 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.881353 | orchestrator | 2026-03-09 01:10:27.881363 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:10:27.881372 | orchestrator | Monday 09 March 2026 01:09:51 +0000 (0:00:13.900) 0:01:51.363 ********** 2026-03-09 01:10:27.881382 | orchestrator | 2026-03-09 01:10:27.881398 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:10:27.881408 | orchestrator | Monday 09 March 2026 01:09:51 +0000 (0:00:00.298) 0:01:51.662 ********** 2026-03-09 01:10:27.881481 | 
orchestrator | 2026-03-09 01:10:27.881492 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:10:27.881502 | orchestrator | Monday 09 March 2026 01:09:52 +0000 (0:00:00.319) 0:01:51.981 ********** 2026-03-09 01:10:27.881512 | orchestrator | 2026-03-09 01:10:27.881521 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-09 01:10:27.881531 | orchestrator | Monday 09 March 2026 01:09:52 +0000 (0:00:00.271) 0:01:52.253 ********** 2026-03-09 01:10:27.881541 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.881559 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:10:27.881569 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:10:27.881579 | orchestrator | 2026-03-09 01:10:27.881589 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-09 01:10:27.881598 | orchestrator | Monday 09 March 2026 01:10:04 +0000 (0:00:12.046) 0:02:04.300 ********** 2026-03-09 01:10:27.881608 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.881618 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:10:27.881627 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:10:27.881637 | orchestrator | 2026-03-09 01:10:27.881646 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-09 01:10:27.881656 | orchestrator | Monday 09 March 2026 01:10:17 +0000 (0:00:12.798) 0:02:17.098 ********** 2026-03-09 01:10:27.881663 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:10:27.881671 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:10:27.881679 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:10:27.881687 | orchestrator | 2026-03-09 01:10:27.881695 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:10:27.881705 | orchestrator | testbed-node-0 : ok=25  changed=19  
unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 01:10:27.881714 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:10:27.881722 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:10:27.881729 | orchestrator | 2026-03-09 01:10:27.881737 | orchestrator | 2026-03-09 01:10:27.881745 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:10:27.881753 | orchestrator | Monday 09 March 2026 01:10:26 +0000 (0:00:09.272) 0:02:26.371 ********** 2026-03-09 01:10:27.881761 | orchestrator | =============================================================================== 2026-03-09 01:10:27.881769 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 19.32s 2026-03-09 01:10:27.881777 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.90s 2026-03-09 01:10:27.881785 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.53s 2026-03-09 01:10:27.881793 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.80s 2026-03-09 01:10:27.881801 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.05s 2026-03-09 01:10:27.881808 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.27s 2026-03-09 01:10:27.881816 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.70s 2026-03-09 01:10:27.881824 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.85s 2026-03-09 01:10:27.881832 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.85s 2026-03-09 01:10:27.881840 | orchestrator | service-ks-register : barbican | Granting/revoking user roles 
----------- 4.46s 2026-03-09 01:10:27.881848 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.20s 2026-03-09 01:10:27.881856 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.20s 2026-03-09 01:10:27.881863 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 3.99s 2026-03-09 01:10:27.881872 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.61s 2026-03-09 01:10:27.881880 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.12s 2026-03-09 01:10:27.881887 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.68s 2026-03-09 01:10:27.881895 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.55s 2026-03-09 01:10:27.881903 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.35s 2026-03-09 01:10:27.881934 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.02s 2026-03-09 01:10:27.881943 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-03-09 01:10:27.881951 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:27.881959 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:27.883225 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:27.883293 | orchestrator | 2026-03-09 01:10:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:30.923614 | orchestrator | 2026-03-09 01:10:30 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:10:30.924001 | orchestrator | 2026-03-09 01:10:30 | INFO  | Task 
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:10:30.925128 | orchestrator | 2026-03-09 01:10:30 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:10:30.926613 | orchestrator | 2026-03-09 01:10:30 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:10:30.926676 | orchestrator | 2026-03-09 01:10:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:11:53.316812 | orchestrator | 2026-03-09 01:11:53 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:11:53.317481 | orchestrator | 2026-03-09 01:11:53 | INFO  | Task
7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:11:53.318224 | orchestrator | 2026-03-09 01:11:53 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:11:53.319442 | orchestrator | 2026-03-09 01:11:53 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:11:53.319516 | orchestrator | 2026-03-09 01:11:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:11:56.366290 | orchestrator | 2026-03-09 01:11:56 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:11:56.368418 | orchestrator | 2026-03-09 01:11:56 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:11:56.371189 | orchestrator | 2026-03-09 01:11:56 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:11:56.371914 | orchestrator | 2026-03-09 01:11:56 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state STARTED 2026-03-09 01:11:56.371946 | orchestrator | 2026-03-09 01:11:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:11:59.418778 | orchestrator | 2026-03-09 01:11:59 | INFO  | Task fd48a228-2c95-40dd-9a32-64318f40f4c0 is in state STARTED 2026-03-09 01:11:59.420154 | orchestrator | 2026-03-09 01:11:59 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:11:59.420203 | orchestrator | 2026-03-09 01:11:59 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:11:59.420956 | orchestrator | 2026-03-09 01:11:59 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:11:59.423674 | orchestrator | 2026-03-09 01:11:59 | INFO  | Task 499632bb-d28f-4485-af33-5527f485229f is in state SUCCESS 2026-03-09 01:11:59.425029 | orchestrator | 2026-03-09 01:11:59.425083 | orchestrator | 2026-03-09 01:11:59.425113 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-03-09 01:11:59.425134 | orchestrator |
2026-03-09 01:11:59.425154 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:11:59.425172 | orchestrator | Monday 09 March 2026 01:08:27 +0000 (0:00:00.356) 0:00:00.356 **********
2026-03-09 01:11:59.425189 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:11:59.425207 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:11:59.425226 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:11:59.425243 | orchestrator |
2026-03-09 01:11:59.425262 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:11:59.425280 | orchestrator | Monday 09 March 2026 01:08:28 +0000 (0:00:00.330) 0:00:00.686 **********
2026-03-09 01:11:59.425292 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-09 01:11:59.425304 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-09 01:11:59.425315 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-09 01:11:59.425326 | orchestrator |
2026-03-09 01:11:59.425337 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-09 01:11:59.425348 | orchestrator |
2026-03-09 01:11:59.425416 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-09 01:11:59.425428 | orchestrator | Monday 09 March 2026 01:08:28 +0000 (0:00:00.460) 0:00:01.147 **********
2026-03-09 01:11:59.425440 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:11:59.425484 | orchestrator |
2026-03-09 01:11:59.425504 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-03-09 01:11:59.425516 | orchestrator | Monday 09 March 2026 01:08:29 +0000 (0:00:00.605) 0:00:01.752 **********
2026-03-09 01:11:59.425527 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-09 01:11:59.425538 | orchestrator |
2026-03-09 01:11:59.425637 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] ***********
2026-03-09 01:11:59.425722 | orchestrator | Monday 09 March 2026 01:08:33 +0000 (0:00:04.116) 0:00:05.868 **********
2026-03-09 01:11:59.425738 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-09 01:11:59.425752 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-09 01:11:59.425765 | orchestrator |
2026-03-09 01:11:59.425778 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-09 01:11:59.425792 | orchestrator | Monday 09 March 2026 01:08:40 +0000 (0:00:07.442) 0:00:13.310 **********
2026-03-09 01:11:59.425889 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:11:59.425901 | orchestrator |
2026-03-09 01:11:59.425912 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-09 01:11:59.425923 | orchestrator | Monday 09 March 2026 01:08:44 +0000 (0:00:04.022) 0:00:17.332 **********
2026-03-09 01:11:59.425934 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-09 01:11:59.425945 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:11:59.425956 | orchestrator |
2026-03-09 01:11:59.425966 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-09 01:11:59.425977 | orchestrator | Monday 09 March 2026 01:08:49 +0000 (0:00:04.727) 0:00:22.060 **********
2026-03-09 01:11:59.425988 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:11:59.426000 | orchestrator |
2026-03-09 01:11:59.426010 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles]
**********
2026-03-09 01:11:59.426085 | orchestrator | Monday 09 March 2026 01:08:54 +0000 (0:00:04.540) 0:00:26.600 **********
2026-03-09 01:11:59.426098 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-09 01:11:59.426110 | orchestrator |
2026-03-09 01:11:59.426121 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-09 01:11:59.426132 | orchestrator | Monday 09 March 2026 01:08:58 +0000 (0:00:04.605) 0:00:31.205 **********
2026-03-09 01:11:59.426167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.426207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.426231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.426244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:11:59.426320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:11:59.426353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:11:59.426386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.426698 | orchestrator |
2026-03-09 01:11:59.426717 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-09 01:11:59.426736 | orchestrator | Monday 09 March 2026 01:09:03 +0000 (0:00:04.402) 0:00:35.608 **********
2026-03-09 01:11:59.426755 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:11:59.426773 | orchestrator |
2026-03-09 01:11:59.426794 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-09 01:11:59.426811 | orchestrator | Monday 09 March 2026 01:09:03 +0000 (0:00:00.341) 0:00:35.949 **********
2026-03-09 01:11:59.426831 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:11:59.426843 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:11:59.426854 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:11:59.426865 | orchestrator |
2026-03-09 01:11:59.426876 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-09 01:11:59.426897 | orchestrator | Monday 09 March 2026 01:09:04 +0000 (0:00:00.897) 0:00:36.847 **********
2026-03-09 01:11:59.426916 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:11:59.426933 | orchestrator |
2026-03-09 01:11:59.426951 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-09 01:11:59.426970 | orchestrator | Monday 09 March 2026 01:09:05 +0000 (0:00:01.098) 0:00:37.945 **********
2026-03-09 01:11:59.427003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.427098 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.427139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.427153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:11:59.427218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:11:59.427271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:11:59.427296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.427734 | orchestrator |
2026-03-09 01:11:59.427751 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-03-09 01:11:59.427770 | orchestrator | Monday 09 March 2026 01:09:12 +0000 (0:00:07.366) 0:00:45.311 **********
2026-03-09 01:11:59.427803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.427825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port':
'9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.427844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.427863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.427890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.427924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.427957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.427977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.427996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.428059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428124 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:59.428136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428147 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:59.428159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428209 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:59.428220 | orchestrator | 2026-03-09 01:11:59.428231 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-09 01:11:59.428242 | orchestrator | Monday 09 March 2026 01:09:17 +0000 (0:00:04.097) 0:00:49.409 ********** 2026-03-09 01:11:59.428265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.428277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.428289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.428309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.428326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.428398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.428415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428512 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:59.428523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428545 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:59.428557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.428599 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:59.428610 | orchestrator | 2026-03-09 01:11:59.428621 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-09 01:11:59.428639 | orchestrator | Monday 09 March 2026 01:09:21 +0000 (0:00:04.871) 0:00:54.281 ********** 2026-03-09 01:11:59.428651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.428663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.428683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.428702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428744 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.428924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.428962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.428984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.429002 | orchestrator |
2026-03-09 01:11:59.429019 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-09 01:11:59.429048 | orchestrator | Monday 09 March 2026 01:09:29 +0000 (0:00:08.000) 0:01:02.281 **********
2026-03-09 01:11:59.429067 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.429087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.429119 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.429148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.429372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.429383 | orchestrator |
2026-03-09 01:11:59.429401 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-09 01:11:59.429413 | orchestrator | Monday 09 March 2026 01:09:53 +0000 (0:00:23.929) 0:01:26.211 **********
2026-03-09 01:11:59.429424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-09 01:11:59.429435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-09 01:11:59.429476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-09 01:11:59.429489 | orchestrator |
2026-03-09 01:11:59.429505 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-09 01:11:59.429516 | orchestrator | Monday 09 March 2026 01:10:00 +0000 (0:00:07.067) 0:01:33.279 **********
2026-03-09 01:11:59.429527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-09 01:11:59.429538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-09 01:11:59.429549 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-09 01:11:59.429560 | orchestrator |
2026-03-09 01:11:59.429571 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-09 01:11:59.429582 | orchestrator | Monday 09 March 2026 01:10:05 +0000 (0:00:04.099) 0:01:37.379 **********
2026-03-09 01:11:59.429594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:11:59.429605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.429623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.429644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.429675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.429687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.429699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.429716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.430490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.430505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:59.430517 | orchestrator |
2026-03-09 01:11:59.430532 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-09 01:11:59.430551 | orchestrator | Monday 09 March 2026 01:10:09 +0000 (0:00:04.120) 0:01:41.499 **********
2026-03-09 01:11:59.430570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.430589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.430623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.430655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-03-09 01:11:59.430823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 
01:11:59.430865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.430891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430918 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.430931 | orchestrator | 2026-03-09 01:11:59.430944 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:11:59.430956 | orchestrator | Monday 09 March 2026 01:10:12 +0000 (0:00:03.866) 0:01:45.365 ********** 2026-03-09 01:11:59.430969 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:59.430982 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:59.430994 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:59.431007 | orchestrator | 2026-03-09 01:11:59.431019 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-09 01:11:59.431031 | orchestrator | Monday 09 March 2026 01:10:13 +0000 (0:00:00.744) 0:01:46.109 ********** 2026-03-09 01:11:59.431044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.431058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.431085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431152 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:59.431177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.431205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.431244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-03-09 01:11:59.431326 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:59.431344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.431374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.431401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.431543 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:59.431554 | orchestrator | 2026-03-09 01:11:59.431565 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-03-09 01:11:59.431576 | orchestrator | Monday 09 March 2026 01:10:14 +0000 (0:00:01.167) 0:01:47.276 ********** 2026-03-09 01:11:59.431587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.431619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.431639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:11:59.431651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:59.431864 | orchestrator | 2026-03-09 01:11:59.431875 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-03-09 01:11:59.431894 | orchestrator | Monday 09 March 2026 01:10:22 +0000 (0:00:07.492) 0:01:54.769 ********** 2026-03-09 01:11:59.431905 | orchestrator | changed: 
[testbed-node-0] => { 2026-03-09 01:11:59.431916 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:11:59.431928 | orchestrator | } 2026-03-09 01:11:59.431939 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:11:59.431950 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:11:59.431961 | orchestrator | } 2026-03-09 01:11:59.431972 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:11:59.431983 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:11:59.431994 | orchestrator | } 2026-03-09 01:11:59.432005 | orchestrator | 2026-03-09 01:11:59.432016 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:11:59.432027 | orchestrator | Monday 09 March 2026 01:10:23 +0000 (0:00:00.635) 0:01:55.404 ********** 2026-03-09 01:11:59.432038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.432054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.432070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432118 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:59.432128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.432143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.432161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432208 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:59.432218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:11:59.432233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:11:59.432250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:59.432299 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:59.432315 | orchestrator | 2026-03-09 01:11:59.432332 | orchestrator | TASK 
[designate : include_tasks] *********************************************** 2026-03-09 01:11:59.432354 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:02.649) 0:01:58.054 ********** 2026-03-09 01:11:59.432373 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:59.432389 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:59.432406 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:59.432422 | orchestrator | 2026-03-09 01:11:59.432439 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-09 01:11:59.432484 | orchestrator | Monday 09 March 2026 01:10:26 +0000 (0:00:00.369) 0:01:58.424 ********** 2026-03-09 01:11:59.432495 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-09 01:11:59.432505 | orchestrator | 2026-03-09 01:11:59.432515 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-09 01:11:59.432525 | orchestrator | Monday 09 March 2026 01:10:28 +0000 (0:00:02.394) 0:02:00.818 ********** 2026-03-09 01:11:59.432534 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:11:59.432545 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-09 01:11:59.432555 | orchestrator | 2026-03-09 01:11:59.432565 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-09 01:11:59.432574 | orchestrator | Monday 09 March 2026 01:10:30 +0000 (0:00:02.532) 0:02:03.351 ********** 2026-03-09 01:11:59.432591 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.432601 | orchestrator | 2026-03-09 01:11:59.432611 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:11:59.432620 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:17.591) 0:02:20.942 ********** 2026-03-09 01:11:59.432630 | orchestrator | 2026-03-09 01:11:59.432640 | orchestrator | TASK 
[designate : Flush handlers] ********************************************** 2026-03-09 01:11:59.432649 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:00.068) 0:02:21.011 ********** 2026-03-09 01:11:59.432659 | orchestrator | 2026-03-09 01:11:59.432668 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:11:59.432678 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:00.068) 0:02:21.080 ********** 2026-03-09 01:11:59.432687 | orchestrator | 2026-03-09 01:11:59.432697 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-09 01:11:59.432723 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:00.099) 0:02:21.179 ********** 2026-03-09 01:11:59.432733 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.432743 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:59.432753 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:59.432762 | orchestrator | 2026-03-09 01:11:59.432772 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-09 01:11:59.432783 | orchestrator | Monday 09 March 2026 01:11:02 +0000 (0:00:13.906) 0:02:35.085 ********** 2026-03-09 01:11:59.432800 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:59.432815 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:59.432830 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.432845 | orchestrator | 2026-03-09 01:11:59.432861 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-09 01:11:59.432875 | orchestrator | Monday 09 March 2026 01:11:11 +0000 (0:00:08.994) 0:02:44.080 ********** 2026-03-09 01:11:59.432889 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.432906 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:59.432922 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:59.432938 | 
orchestrator | 2026-03-09 01:11:59.432955 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-09 01:11:59.432971 | orchestrator | Monday 09 March 2026 01:11:18 +0000 (0:00:06.659) 0:02:50.739 ********** 2026-03-09 01:11:59.432987 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.433000 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:59.433011 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:59.433020 | orchestrator | 2026-03-09 01:11:59.433030 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-09 01:11:59.433040 | orchestrator | Monday 09 March 2026 01:11:31 +0000 (0:00:13.574) 0:03:04.313 ********** 2026-03-09 01:11:59.433049 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:59.433059 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:59.433068 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.433078 | orchestrator | 2026-03-09 01:11:59.433087 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-09 01:11:59.433097 | orchestrator | Monday 09 March 2026 01:11:41 +0000 (0:00:09.307) 0:03:13.621 ********** 2026-03-09 01:11:59.433107 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.433116 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:59.433126 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:59.433136 | orchestrator | 2026-03-09 01:11:59.433145 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-09 01:11:59.433155 | orchestrator | Monday 09 March 2026 01:11:47 +0000 (0:00:06.515) 0:03:20.137 ********** 2026-03-09 01:11:59.433164 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:59.433174 | orchestrator | 2026-03-09 01:11:59.433183 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 
01:11:59.433193 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 01:11:59.433204 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:11:59.433214 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:11:59.433224 | orchestrator | 2026-03-09 01:11:59.433234 | orchestrator | 2026-03-09 01:11:59.433243 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:11:59.433253 | orchestrator | Monday 09 March 2026 01:11:55 +0000 (0:00:07.842) 0:03:27.980 ********** 2026-03-09 01:11:59.433262 | orchestrator | =============================================================================== 2026-03-09 01:11:59.433272 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.93s 2026-03-09 01:11:59.433281 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.59s 2026-03-09 01:11:59.433304 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.91s 2026-03-09 01:11:59.433314 | orchestrator | designate : Restart designate-producer container ----------------------- 13.57s 2026-03-09 01:11:59.433324 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.31s 2026-03-09 01:11:59.433333 | orchestrator | designate : Restart designate-api container ----------------------------- 8.99s 2026-03-09 01:11:59.433342 | orchestrator | designate : Copying over config.json files for services ----------------- 8.00s 2026-03-09 01:11:59.433352 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.84s 2026-03-09 01:11:59.433362 | orchestrator | service-check-containers : designate | Check containers ----------------- 7.49s 2026-03-09 01:11:59.433371 | 
orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.44s 2026-03-09 01:11:59.433381 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.37s 2026-03-09 01:11:59.433399 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.07s 2026-03-09 01:11:59.433409 | orchestrator | designate : Restart designate-central container ------------------------- 6.66s 2026-03-09 01:11:59.433419 | orchestrator | designate : Restart designate-worker container -------------------------- 6.52s 2026-03-09 01:11:59.433428 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 4.87s 2026-03-09 01:11:59.433438 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.73s 2026-03-09 01:11:59.433472 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.61s 2026-03-09 01:11:59.433490 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.54s 2026-03-09 01:11:59.433500 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.40s 2026-03-09 01:11:59.433510 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.12s 2026-03-09 01:11:59.433526 | orchestrator | 2026-03-09 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:12:02.465991 | orchestrator | 2026-03-09 01:12:02 | INFO  | Task fd48a228-2c95-40dd-9a32-64318f40f4c0 is in state STARTED 2026-03-09 01:12:02.468695 | orchestrator | 2026-03-09 01:12:02 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED 2026-03-09 01:12:02.472636 | orchestrator | 2026-03-09 01:12:02 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:12:02.478754 | orchestrator | 2026-03-09 01:12:02 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 
2026-03-09 01:12:02.478842 | orchestrator | 2026-03-09 01:12:02 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:12:05.536622 | orchestrator | 2026-03-09 01:12:05 | INFO  | Task fd48a228-2c95-40dd-9a32-64318f40f4c0 is in state STARTED
2026-03-09 01:12:05.538476 | orchestrator | 2026-03-09 01:12:05 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state STARTED
2026-03-09 01:14:05.651563 | orchestrator | 2026-03-09 01:14:05 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:14:05.651780 | orchestrator | 2026-03-09 01:14:05 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED
2026-03-09 01:14:05.652150 | orchestrator | 2026-03-09 01:14:05 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:14:08.698783 | orchestrator | 2026-03-09 01:14:08 | INFO  | Task fd48a228-2c95-40dd-9a32-64318f40f4c0 is in state SUCCESS
2026-03-09 01:14:08.699998 | orchestrator |
2026-03-09 01:14:08.700035 | orchestrator |
2026-03-09 01:14:08.700044 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:14:08.700053 | orchestrator |
2026-03-09 01:14:08.700061 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:14:08.700069 | orchestrator | Monday 09 March 2026 01:12:04 +0000 (0:00:00.343) 0:00:00.343 **********
2026-03-09 01:14:08.700101 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:14:08.700110 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:14:08.700117 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:14:08.700124 | orchestrator |
2026-03-09 01:14:08.700132 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:14:08.700139 | orchestrator | Monday 09 March 2026 01:12:04 +0000 (0:00:00.468) 0:00:00.811 **********
2026-03-09 01:14:08.700147 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-09 01:14:08.700154 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-09 01:14:08.700162 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-09 01:14:08.700169 | orchestrator |
2026-03-09 01:14:08.700177 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-09 01:14:08.700184 | orchestrator |
2026-03-09 01:14:08.700191 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-09 01:14:08.700198 | orchestrator | Monday 09 March 2026 01:12:06 +0000 (0:00:01.637) 0:00:02.448 **********
2026-03-09 01:14:08.700206 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:14:08.700214 | orchestrator |
2026-03-09 01:14:08.700221 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-03-09 01:14:08.700228 | orchestrator | Monday 09 March 2026 01:12:07 +0000 (0:00:00.895) 0:00:03.343 **********
2026-03-09 01:14:08.700235 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-09 01:14:08.700242 | orchestrator |
2026-03-09 01:14:08.700249 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-03-09 01:14:08.700257 | orchestrator | Monday 09 March 2026 01:12:11 +0000 (0:00:04.135) 0:00:07.478 **********
2026-03-09 01:14:08.700264 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-09 01:14:08.700272 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-09 01:14:08.700279 | orchestrator |
2026-03-09 01:14:08.700286 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-09 01:14:08.700293 | orchestrator | Monday 09 March 2026 01:12:19 +0000 (0:00:08.172) 0:00:15.651 **********
2026-03-09 01:14:08.700301 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:14:08.700376 | orchestrator |
2026-03-09 01:14:08.700385 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-09 01:14:08.700393 | orchestrator | Monday 09 March 2026 01:12:23 +0000 (0:00:03.900) 0:00:19.551 **********
2026-03-09 01:14:08.700413 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-09 01:14:08.700421 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:14:08.700428 | orchestrator |
2026-03-09 01:14:08.700436 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-09 01:14:08.700443 | orchestrator | Monday 09 March 2026 01:12:27 +0000 (0:00:04.354) 0:00:23.906 **********
2026-03-09 01:14:08.700450 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:14:08.700457 | orchestrator |
2026-03-09 01:14:08.700504 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] **********
2026-03-09 01:14:08.700512 | orchestrator | Monday 09 March 2026 01:12:31 +0000 (0:00:03.686) 0:00:27.593 **********
2026-03-09 01:14:08.700519 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-09 01:14:08.700526 | orchestrator |
2026-03-09 01:14:08.700533 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-09 01:14:08.700541 | orchestrator | Monday 09 March 2026 01:12:36 +0000 (0:00:04.650) 0:00:32.244 **********
2026-03-09 01:14:08.700548 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.700555 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:14:08.700562 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:14:08.700576 | orchestrator |
2026-03-09 01:14:08.700584 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-09 01:14:08.700591 | orchestrator | Monday 09 March 2026 01:12:36 +0000 (0:00:00.472) 0:00:32.716 **********
2026-03-09 01:14:08.700614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700648 | orchestrator |
2026-03-09 01:14:08.700655 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-09 01:14:08.700662 | orchestrator | Monday 09 March 2026 01:12:37 +0000 (0:00:01.098) 0:00:33.815 **********
2026-03-09 01:14:08.700670 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.700677 | orchestrator |
2026-03-09 01:14:08.700685 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-09 01:14:08.700692 | orchestrator | Monday 09 March 2026 01:12:37 +0000 (0:00:00.153) 0:00:33.968 **********
2026-03-09 01:14:08.700699 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.700711 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:14:08.700719 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:14:08.700726 | orchestrator |
2026-03-09 01:14:08.700733 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-09 01:14:08.700740 | orchestrator | Monday 09 March 2026 01:12:38 +0000 (0:00:00.576) 0:00:34.544 **********
2026-03-09 01:14:08.700748 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:14:08.700755 | orchestrator |
2026-03-09 01:14:08.700762 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-09 01:14:08.700770 | orchestrator | Monday 09 March 2026 01:12:39 +0000 (0:00:00.691) 0:00:35.236 **********
2026-03-09 01:14:08.700778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700811 | orchestrator |
2026-03-09 01:14:08.700833 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-09 01:14:08.700845 | orchestrator | Monday 09 March 2026 01:12:41 +0000 (0:00:02.405) 0:00:37.642 **********
2026-03-09 01:14:08.700853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700861 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.700875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700883 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:14:08.700891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700899 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:14:08.700906 | orchestrator |
2026-03-09 01:14:08.700913 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-09 01:14:08.700921 | orchestrator | Monday 09 March 2026 01:12:43 +0000 (0:00:01.825) 0:00:39.467 **********
2026-03-09 01:14:08.700932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700953 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.700961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.700968 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:14:08.700987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701000 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:14:08.701013 | orchestrator |
2026-03-09 01:14:08.701025 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-09 01:14:08.701038 | orchestrator | Monday 09 March 2026 01:12:44 +0000 (0:00:01.356) 0:00:40.824 **********
2026-03-09 01:14:08.701050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701102 | orchestrator |
2026-03-09 01:14:08.701115 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-09 01:14:08.701127 | orchestrator | Monday 09 March 2026 01:12:46 +0000 (0:00:01.458) 0:00:42.283 **********
2026-03-09 01:14:08.701148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701212 | orchestrator |
2026-03-09 01:14:08.701227 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-09 01:14:08.701242 | orchestrator | Monday 09 March 2026 01:12:49 +0000 (0:00:03.589) 0:00:45.872 **********
2026-03-09 01:14:08.701257 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-09 01:14:08.701273 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.701288 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-09 01:14:08.701303 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:14:08.701317 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-09 01:14:08.701331 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:14:08.701345 | orchestrator |
2026-03-09 01:14:08.701359 | orchestrator | TASK [Configure uWSGI for Placement] *******************************************
2026-03-09 01:14:08.701373 | orchestrator | Monday 09 March 2026 01:12:50 +0000 (0:00:01.020) 0:00:46.893 **********
2026-03-09 01:14:08.701387 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:14:08.701402 | orchestrator |
2026-03-09 01:14:08.701416 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] **********
2026-03-09 01:14:08.701438 | orchestrator | Monday 09 March 2026 01:12:52 +0000 (0:00:01.962) 0:00:48.856 **********
2026-03-09 01:14:08.701452 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:14:08.701493 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:14:08.701507 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:14:08.701521 | orchestrator |
2026-03-09 01:14:08.701535 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-09 01:14:08.701549 | orchestrator | Monday 09 March 2026 01:12:56 +0000 (0:00:03.834) 0:00:52.691 **********
2026-03-09 01:14:08.701563 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:14:08.701576 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:14:08.701590 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:14:08.701604 | orchestrator |
2026-03-09 01:14:08.701618 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-09 01:14:08.701644 | orchestrator | Monday 09 March 2026 01:12:58 +0000 (0:00:01.785) 0:00:54.476 **********
2026-03-09 01:14:08.701660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701675 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:14:08.701697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701713 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:14:08.701728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701744 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:14:08.701757 | orchestrator |
2026-03-09 01:14:08.701770 | orchestrator | TASK [service-check-containers : placement | Check containers] *****************
2026-03-09 01:14:08.701783 | orchestrator | Monday 09 March 2026 01:12:59 +0000 (0:00:01.237) 0:00:55.714 **********
2026-03-09 01:14:08.701807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-09 01:14:08.701879 | orchestrator |
2026-03-09 01:14:08.701893 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] ***
2026-03-09 01:14:08.701906 | orchestrator | Monday 09 March 2026 01:13:01 +0000 (0:00:02.306) 0:00:58.020 **********
2026-03-09 01:14:08.701919 | orchestrator | changed: [testbed-node-0] => {
2026-03-09 01:14:08.701933 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 01:14:08.701947 | orchestrator | }
2026-03-09 01:14:08.701962 | orchestrator | changed: [testbed-node-1] => {
2026-03-09 01:14:08.701976 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 01:14:08.701990 | orchestrator | }
2026-03-09 01:14:08.702003 | orchestrator | changed: [testbed-node-2] => {
2026-03-09 01:14:08.702082 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 01:14:08.702102 | orchestrator | }
2026-03-09 01:14:08.702115 | orchestrator |
2026-03-09 01:14:08.702129 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-09 01:14:08.702143 | orchestrator | Monday 09 March 2026 01:13:02 +0000 (0:00:00.836) 0:00:58.856 **********
2026-03-09 01:14:08.702169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'],
'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 01:14:08.702196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 01:14:08.702213 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.702227 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.702248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-09 01:14:08.702264 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.702278 | orchestrator | 2026-03-09 01:14:08.702294 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-09 01:14:08.702308 | orchestrator | Monday 09 March 2026 01:13:03 +0000 (0:00:01.072) 0:00:59.929 ********** 2026-03-09 01:14:08.702324 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.702339 | orchestrator | 2026-03-09 01:14:08.702353 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-09 01:14:08.702368 | orchestrator | Monday 09 March 2026 01:13:06 +0000 (0:00:02.361) 0:01:02.290 ********** 2026-03-09 01:14:08.702382 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.702396 | orchestrator | 2026-03-09 01:14:08.702410 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-09 01:14:08.702426 | orchestrator | Monday 09 March 2026 01:13:08 +0000 (0:00:02.548) 0:01:04.839 ********** 2026-03-09 01:14:08.702450 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.702495 | orchestrator | 
2026-03-09 01:14:08.702509 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-09 01:14:08.702522 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:15.780) 0:01:20.619 ********** 2026-03-09 01:14:08.702536 | orchestrator | 2026-03-09 01:14:08.702551 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-09 01:14:08.702565 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:00.073) 0:01:20.693 ********** 2026-03-09 01:14:08.702579 | orchestrator | 2026-03-09 01:14:08.702593 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-09 01:14:08.702607 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:00.289) 0:01:20.982 ********** 2026-03-09 01:14:08.702620 | orchestrator | 2026-03-09 01:14:08.702634 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-09 01:14:08.702647 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:00.082) 0:01:21.065 ********** 2026-03-09 01:14:08.702657 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.702665 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:14:08.702674 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:14:08.702682 | orchestrator | 2026-03-09 01:14:08.702698 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:14:08.702708 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 01:14:08.702721 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:14:08.702736 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:14:08.702750 | orchestrator | 2026-03-09 01:14:08.702765 | orchestrator | 2026-03-09 01:14:08.702779 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:14:08.702793 | orchestrator | Monday 09 March 2026 01:13:35 +0000 (0:00:10.784) 0:01:31.850 ********** 2026-03-09 01:14:08.702807 | orchestrator | =============================================================================== 2026-03-09 01:14:08.702821 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.78s 2026-03-09 01:14:08.702835 | orchestrator | placement : Restart placement-api container ---------------------------- 10.78s 2026-03-09 01:14:08.702850 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 8.17s 2026-03-09 01:14:08.702864 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 4.65s 2026-03-09 01:14:08.702879 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.35s 2026-03-09 01:14:08.702894 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 4.14s 2026-03-09 01:14:08.702907 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.90s 2026-03-09 01:14:08.702921 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 3.83s 2026-03-09 01:14:08.702930 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.69s 2026-03-09 01:14:08.702938 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.59s 2026-03-09 01:14:08.702947 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.55s 2026-03-09 01:14:08.702955 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.41s 2026-03-09 01:14:08.702964 | orchestrator | placement : Creating placement databases -------------------------------- 2.36s 2026-03-09 01:14:08.702972 | orchestrator | 
service-check-containers : placement | Check containers ----------------- 2.31s 2026-03-09 01:14:08.702981 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 1.96s 2026-03-09 01:14:08.702995 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.83s 2026-03-09 01:14:08.703013 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.79s 2026-03-09 01:14:08.703022 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.64s 2026-03-09 01:14:08.703030 | orchestrator | placement : Copying over config.json files for services ----------------- 1.46s 2026-03-09 01:14:08.703039 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.36s 2026-03-09 01:14:08.704604 | orchestrator | 2026-03-09 01:14:08 | INFO  | Task efd60857-1a0f-492c-a61c-02e3033d1b89 is in state STARTED 2026-03-09 01:14:08.705620 | orchestrator | 2026-03-09 01:14:08 | INFO  | Task bf1507b1-7a66-44b9-a9af-2037365512c2 is in state STARTED 2026-03-09 01:14:08.709701 | orchestrator | 2026-03-09 01:14:08 | INFO  | Task 85f04564-2284-4041-8c62-fcbfc0e08dd1 is in state SUCCESS 2026-03-09 01:14:08.711318 | orchestrator | 2026-03-09 01:14:08.711359 | orchestrator | 2026-03-09 01:14:08.711369 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:14:08.711379 | orchestrator | 2026-03-09 01:14:08.711388 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:14:08.711397 | orchestrator | Monday 09 March 2026 01:07:32 +0000 (0:00:00.441) 0:00:00.441 ********** 2026-03-09 01:14:08.711406 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:14:08.711416 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:14:08.711425 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:14:08.711434 | orchestrator | ok: [testbed-node-3] 
2026-03-09 01:14:08.711443 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:14:08.711452 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:14:08.711499 | orchestrator | 2026-03-09 01:14:08.711513 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:14:08.711522 | orchestrator | Monday 09 March 2026 01:07:33 +0000 (0:00:00.790) 0:00:01.232 ********** 2026-03-09 01:14:08.711531 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-09 01:14:08.711541 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-09 01:14:08.711549 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-09 01:14:08.711558 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-09 01:14:08.711567 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-09 01:14:08.711575 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-09 01:14:08.711584 | orchestrator | 2026-03-09 01:14:08.711593 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-09 01:14:08.711601 | orchestrator | 2026-03-09 01:14:08.711664 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:14:08.711674 | orchestrator | Monday 09 March 2026 01:07:34 +0000 (0:00:00.778) 0:00:02.010 ********** 2026-03-09 01:14:08.711684 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:14:08.711695 | orchestrator | 2026-03-09 01:14:08.712338 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-09 01:14:08.712357 | orchestrator | Monday 09 March 2026 01:07:36 +0000 (0:00:02.388) 0:00:04.399 ********** 2026-03-09 01:14:08.712367 | orchestrator | ok: [testbed-node-0] 2026-03-09 
01:14:08.712376 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:14:08.712385 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:14:08.712394 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:14:08.712403 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:14:08.712412 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:14:08.712421 | orchestrator | 2026-03-09 01:14:08.712430 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-09 01:14:08.712439 | orchestrator | Monday 09 March 2026 01:07:38 +0000 (0:00:02.077) 0:00:06.477 ********** 2026-03-09 01:14:08.712448 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:14:08.712456 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:14:08.712523 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:14:08.712550 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:14:08.712559 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:14:08.712567 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:14:08.712579 | orchestrator | 2026-03-09 01:14:08.712595 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-09 01:14:08.712614 | orchestrator | Monday 09 March 2026 01:07:39 +0000 (0:00:01.179) 0:00:07.656 ********** 2026-03-09 01:14:08.712635 | orchestrator | ok: [testbed-node-0] => { 2026-03-09 01:14:08.712652 | orchestrator |  "changed": false, 2026-03-09 01:14:08.712667 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:14:08.712682 | orchestrator | } 2026-03-09 01:14:08.712695 | orchestrator | ok: [testbed-node-1] => { 2026-03-09 01:14:08.712708 | orchestrator |  "changed": false, 2026-03-09 01:14:08.712725 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:14:08.712740 | orchestrator | } 2026-03-09 01:14:08.712756 | orchestrator | ok: [testbed-node-2] => { 2026-03-09 01:14:08.712772 | orchestrator |  "changed": false, 2026-03-09 01:14:08.712828 | orchestrator |  "msg": "All assertions passed" 
2026-03-09 01:14:08.712838 | orchestrator | } 2026-03-09 01:14:08.712847 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 01:14:08.712855 | orchestrator |  "changed": false, 2026-03-09 01:14:08.712864 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:14:08.712873 | orchestrator | } 2026-03-09 01:14:08.712885 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 01:14:08.712900 | orchestrator |  "changed": false, 2026-03-09 01:14:08.712914 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:14:08.712928 | orchestrator | } 2026-03-09 01:14:08.712942 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 01:14:08.712956 | orchestrator |  "changed": false, 2026-03-09 01:14:08.712970 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:14:08.712986 | orchestrator | } 2026-03-09 01:14:08.713001 | orchestrator | 2026-03-09 01:14:08.713016 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-09 01:14:08.713031 | orchestrator | Monday 09 March 2026 01:07:40 +0000 (0:00:00.956) 0:00:08.613 ********** 2026-03-09 01:14:08.713043 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.713052 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.713060 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.713069 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.713096 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.713105 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.713113 | orchestrator | 2026-03-09 01:14:08.713122 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-03-09 01:14:08.713131 | orchestrator | Monday 09 March 2026 01:07:41 +0000 (0:00:00.662) 0:00:09.275 ********** 2026-03-09 01:14:08.713139 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-09 01:14:08.713148 | orchestrator | 2026-03-09 01:14:08.713157 | orchestrator | TASK 
[service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-03-09 01:14:08.713166 | orchestrator | Monday 09 March 2026 01:07:45 +0000 (0:00:04.279) 0:00:13.555 ********** 2026-03-09 01:14:08.713175 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-09 01:14:08.713185 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-09 01:14:08.713194 | orchestrator | 2026-03-09 01:14:08.713251 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-09 01:14:08.713262 | orchestrator | Monday 09 March 2026 01:07:53 +0000 (0:00:07.784) 0:00:21.339 ********** 2026-03-09 01:14:08.713271 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:14:08.713279 | orchestrator | 2026-03-09 01:14:08.713288 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-09 01:14:08.713297 | orchestrator | Monday 09 March 2026 01:07:57 +0000 (0:00:03.612) 0:00:24.952 ********** 2026-03-09 01:14:08.713305 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-09 01:14:08.713325 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:14:08.713334 | orchestrator | 2026-03-09 01:14:08.713342 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-09 01:14:08.713351 | orchestrator | Monday 09 March 2026 01:08:01 +0000 (0:00:04.360) 0:00:29.312 ********** 2026-03-09 01:14:08.713360 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:14:08.713368 | orchestrator | 2026-03-09 01:14:08.713377 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-03-09 01:14:08.713386 | orchestrator | Monday 09 March 2026 01:08:05 +0000 (0:00:04.048) 0:00:33.360 ********** 2026-03-09 
01:14:08.713394 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-09 01:14:08.713403 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-09 01:14:08.713412 | orchestrator | 2026-03-09 01:14:08.713420 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:14:08.713429 | orchestrator | Monday 09 March 2026 01:08:12 +0000 (0:00:07.400) 0:00:40.761 ********** 2026-03-09 01:14:08.713437 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.713446 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.713454 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.713484 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.713494 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.713502 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.713511 | orchestrator | 2026-03-09 01:14:08.713519 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-09 01:14:08.713528 | orchestrator | Monday 09 March 2026 01:08:13 +0000 (0:00:00.870) 0:00:41.631 ********** 2026-03-09 01:14:08.713536 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.713545 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.713554 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.713562 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.713571 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.713580 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.713588 | orchestrator | 2026-03-09 01:14:08.713597 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-09 01:14:08.713606 | orchestrator | Monday 09 March 2026 01:08:15 +0000 (0:00:02.121) 0:00:43.752 ********** 2026-03-09 01:14:08.713614 | orchestrator | ok: [testbed-node-0] 2026-03-09 
01:14:08.713705 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:14:08.713714 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:14:08.713775 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:14:08.713787 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:14:08.713795 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:14:08.713804 | orchestrator | 2026-03-09 01:14:08.713813 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-09 01:14:08.713821 | orchestrator | Monday 09 March 2026 01:08:17 +0000 (0:00:01.293) 0:00:45.046 ********** 2026-03-09 01:14:08.713830 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.713839 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.713848 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.713856 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.713865 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.713873 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.713882 | orchestrator | 2026-03-09 01:14:08.713891 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-09 01:14:08.713900 | orchestrator | Monday 09 March 2026 01:08:19 +0000 (0:00:02.431) 0:00:47.478 ********** 2026-03-09 01:14:08.713918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.713970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.713982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.713992 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.714003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}}) 2026-03-09 01:14:08.714067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.714081 | orchestrator | 2026-03-09 01:14:08.714090 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-09 01:14:08.714099 | orchestrator | Monday 09 March 2026 01:08:23 +0000 (0:00:03.654) 0:00:51.133 ********** 2026-03-09 01:14:08.714109 | orchestrator | [WARNING]: Skipped 2026-03-09 01:14:08.714118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-09 01:14:08.714154 | orchestrator | due to this access issue: 2026-03-09 01:14:08.714164 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-09 01:14:08.714173 | orchestrator | a directory 2026-03-09 01:14:08.714182 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:14:08.714191 | orchestrator | 2026-03-09 01:14:08.714200 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:14:08.714208 | orchestrator | Monday 09 March 2026 01:08:24 +0000 (0:00:00.994) 0:00:52.127 ********** 2026-03-09 01:14:08.714217 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:14:08.714228 | orchestrator | 2026-03-09 01:14:08.714237 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-09 01:14:08.714245 | orchestrator | Monday 09 March 2026 01:08:25 +0000 (0:00:01.468) 0:00:53.596 ********** 2026-03-09 01:14:08.714254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.714265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.714285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.714320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.714331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.714340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.714349 | orchestrator | 2026-03-09 01:14:08.714358 | orchestrator | TASK [service-cert-copy : neutron | Copying over 
backend internal TLS certificate] *** 2026-03-09 01:14:08.714367 | orchestrator | Monday 09 March 2026 01:08:29 +0000 (0:00:03.405) 0:00:57.001 ********** 2026-03-09 01:14:08.714376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.714391 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.714401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.714411 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.714444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.714454 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.714518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.714529 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.714538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.714554 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.714567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-03-09 01:14:08.714576 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.714585 | orchestrator | 2026-03-09 01:14:08.714594 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-09 01:14:08.714602 | orchestrator | Monday 09 March 2026 01:08:32 +0000 (0:00:02.959) 0:00:59.961 ********** 2026-03-09 01:14:08.714639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.714650 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.714659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.714669 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.714684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.714693 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.714702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.714716 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.714748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.714759 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.714768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.714777 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.714786 | orchestrator | 2026-03-09 01:14:08.714794 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-09 01:14:08.714803 | orchestrator | Monday 09 March 2026 01:08:35 +0000 (0:00:03.305) 0:01:03.267 ********** 2026-03-09 01:14:08.714812 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.714820 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.714829 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.714844 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.714852 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.714861 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.714869 | orchestrator | 2026-03-09 01:14:08.714878 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-09 01:14:08.714887 | orchestrator | Monday 09 March 2026 01:08:37 +0000 (0:00:01.997) 0:01:05.264 ********** 2026-03-09 01:14:08.714895 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.714904 | orchestrator | 2026-03-09 01:14:08.714913 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-09 01:14:08.714921 | orchestrator | Monday 09 March 2026 01:08:37 +0000 (0:00:00.145) 0:01:05.410 ********** 2026-03-09 01:14:08.714930 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.714939 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.714947 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.714956 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.714964 | orchestrator | skipping: [testbed-node-4] 
2026-03-09 01:14:08.714973 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.714981 | orchestrator | 2026-03-09 01:14:08.714990 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-09 01:14:08.715003 | orchestrator | Monday 09 March 2026 01:08:38 +0000 (0:00:00.905) 0:01:06.315 ********** 2026-03-09 01:14:08.715018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.715033 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.715060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.715110 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.715127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.715151 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.715165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715180 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.715194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715210 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.715232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715248 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.715262 | orchestrator | 2026-03-09 01:14:08.715276 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-09 01:14:08.715285 | orchestrator | Monday 09 March 2026 01:08:41 +0000 (0:00:02.647) 0:01:08.963 ********** 2026-03-09 01:14:08.715301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.715352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.715369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.715384 | orchestrator | 2026-03-09 01:14:08.715393 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-09 01:14:08.715402 | orchestrator | Monday 09 March 2026 01:08:45 +0000 (0:00:04.077) 0:01:13.041 ********** 2026-03-09 01:14:08.715411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.715420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715430 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.715451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 
2026-03-09 01:14:08.715509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.715544 | orchestrator | 2026-03-09 01:14:08.715560 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-09 
01:14:08.715570 | orchestrator | Monday 09 March 2026 01:08:51 +0000 (0:00:06.614) 0:01:19.656 ********** 2026-03-09 01:14:08.715579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.715588 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.715608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.715624 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.715634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.715643 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.715652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715661 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.715670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715679 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.715692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715706 
| orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.715716 | orchestrator | 2026-03-09 01:14:08.715724 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-09 01:14:08.715733 | orchestrator | Monday 09 March 2026 01:08:55 +0000 (0:00:03.601) 0:01:23.257 ********** 2026-03-09 01:14:08.715742 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.715750 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.715759 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.715767 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.715776 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:14:08.715784 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:14:08.715793 | orchestrator | 2026-03-09 01:14:08.715802 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-09 01:14:08.715815 | orchestrator | Monday 09 March 2026 01:08:59 +0000 (0:00:04.239) 0:01:27.497 ********** 2026-03-09 01:14:08.715824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715833 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.715842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715851 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.715860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.715869 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.715882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.715926 | orchestrator | 2026-03-09 01:14:08.715935 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-09 01:14:08.715944 | orchestrator | Monday 09 March 2026 01:09:05 +0000 (0:00:06.197) 0:01:33.695 ********** 2026-03-09 01:14:08.715953 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.715961 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.715970 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.715979 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.715987 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.715996 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716004 | orchestrator | 2026-03-09 01:14:08.716013 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-09 01:14:08.716022 | orchestrator | Monday 09 March 2026 01:09:09 +0000 (0:00:03.343) 0:01:37.038 ********** 2026-03-09 01:14:08.716030 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.716039 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.716048 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.716056 | orchestrator | 
skipping: [testbed-node-3] 2026-03-09 01:14:08.716065 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716073 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.716082 | orchestrator | 2026-03-09 01:14:08.716090 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-09 01:14:08.716099 | orchestrator | Monday 09 March 2026 01:09:14 +0000 (0:00:05.202) 0:01:42.241 ********** 2026-03-09 01:14:08.716114 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.716122 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.716131 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.716141 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.716156 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.716170 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716183 | orchestrator | 2026-03-09 01:14:08.716197 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-09 01:14:08.716211 | orchestrator | Monday 09 March 2026 01:09:18 +0000 (0:00:04.356) 0:01:46.598 ********** 2026-03-09 01:14:08.716224 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.716238 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.716252 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.716265 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.716277 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.716291 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716304 | orchestrator | 2026-03-09 01:14:08.716319 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-09 01:14:08.716333 | orchestrator | Monday 09 March 2026 01:09:23 +0000 (0:00:04.427) 0:01:51.026 ********** 2026-03-09 01:14:08.716347 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.716360 | orchestrator | 
skipping: [testbed-node-1] 2026-03-09 01:14:08.716374 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.716389 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.716401 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716412 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.716425 | orchestrator | 2026-03-09 01:14:08.716438 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-09 01:14:08.716488 | orchestrator | Monday 09 March 2026 01:09:26 +0000 (0:00:03.506) 0:01:54.532 ********** 2026-03-09 01:14:08.716504 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:14:08.716516 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.716529 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:14:08.716543 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.716556 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:14:08.716570 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.716582 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:14:08.716595 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.716609 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:14:08.716622 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716646 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:14:08.716660 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.716673 | orchestrator | 2026-03-09 01:14:08.716689 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-09 
01:14:08.716702 | orchestrator | Monday 09 March 2026 01:09:29 +0000 (0:00:02.891) 0:01:57.423 ********** 2026-03-09 01:14:08.716719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.716747 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.716763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.716779 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.716793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.716815 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.716840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.716856 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.716871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.716895 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.716911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.716927 
| orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.716942 | orchestrator | 2026-03-09 01:14:08.716956 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-09 01:14:08.716971 | orchestrator | Monday 09 March 2026 01:09:33 +0000 (0:00:04.169) 0:02:01.593 ********** 2026-03-09 01:14:08.716986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.717001 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.717040 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.717090 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.717121 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.717136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.717152 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.717166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.717181 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.717193 | orchestrator | 2026-03-09 01:14:08.717206 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-09 01:14:08.717226 | orchestrator | Monday 09 March 2026 01:09:36 +0000 (0:00:03.034) 0:02:04.628 ********** 2026-03-09 01:14:08.717240 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717254 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717268 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717280 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.717294 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.717307 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.717319 | orchestrator | 2026-03-09 01:14:08.717332 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-09 01:14:08.717345 | orchestrator | Monday 09 March 2026 01:09:40 +0000 (0:00:03.480) 0:02:08.109 ********** 2026-03-09 01:14:08.717359 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717373 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717396 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717407 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:14:08.717420 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:14:08.717433 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:14:08.717446 | orchestrator | 2026-03-09 01:14:08.717494 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-09 01:14:08.717508 | orchestrator | Monday 09 March 2026 01:09:45 +0000 (0:00:05.363) 0:02:13.472 ********** 2026-03-09 
01:14:08.717520 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717534 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717547 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717560 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.717574 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.717587 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.717600 | orchestrator | 2026-03-09 01:14:08.717614 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-09 01:14:08.717629 | orchestrator | Monday 09 March 2026 01:09:49 +0000 (0:00:04.149) 0:02:17.621 ********** 2026-03-09 01:14:08.717644 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717657 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.717672 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717685 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717697 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.717710 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.717724 | orchestrator | 2026-03-09 01:14:08.717736 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-09 01:14:08.717750 | orchestrator | Monday 09 March 2026 01:09:53 +0000 (0:00:03.712) 0:02:21.333 ********** 2026-03-09 01:14:08.717763 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717777 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717790 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.717804 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717817 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.717832 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.717846 | orchestrator | 2026-03-09 01:14:08.717860 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] 
************************************ 2026-03-09 01:14:08.717874 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:03.920) 0:02:25.253 ********** 2026-03-09 01:14:08.717889 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.717903 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.717918 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.717932 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.717946 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.717960 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.717976 | orchestrator | 2026-03-09 01:14:08.717991 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-09 01:14:08.718005 | orchestrator | Monday 09 March 2026 01:10:00 +0000 (0:00:02.753) 0:02:28.007 ********** 2026-03-09 01:14:08.718082 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.718094 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.718103 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.718111 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.718120 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.718129 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.718137 | orchestrator | 2026-03-09 01:14:08.718146 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-09 01:14:08.718155 | orchestrator | Monday 09 March 2026 01:10:03 +0000 (0:00:03.357) 0:02:31.364 ********** 2026-03-09 01:14:08.718164 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.718172 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.718181 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.718190 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.718198 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.718216 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 01:14:08.718225 | orchestrator | 2026-03-09 01:14:08.718233 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-09 01:14:08.718242 | orchestrator | Monday 09 March 2026 01:10:06 +0000 (0:00:02.882) 0:02:34.247 ********** 2026-03-09 01:14:08.718251 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.718259 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.718268 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.718276 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.718284 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.718293 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.718301 | orchestrator | 2026-03-09 01:14:08.718310 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-09 01:14:08.718318 | orchestrator | Monday 09 March 2026 01:10:08 +0000 (0:00:02.468) 0:02:36.716 ********** 2026-03-09 01:14:08.718327 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:14:08.718337 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.718346 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:14:08.718355 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.718363 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:14:08.718372 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.718387 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:14:08.718396 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.718405 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
2026-03-09 01:14:08.718413 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.718422 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:14:08.718431 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.718439 | orchestrator | 2026-03-09 01:14:08.718448 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-09 01:14:08.718457 | orchestrator | Monday 09 March 2026 01:10:11 +0000 (0:00:02.638) 0:02:39.354 ********** 2026-03-09 01:14:08.718773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.718793 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.718803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.718822 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.718831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.718840 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.718856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.718866 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.718882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.718892 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.718901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.718916 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.718925 | orchestrator | 2026-03-09 01:14:08.718934 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-03-09 01:14:08.718943 | orchestrator | Monday 09 March 2026 01:10:14 +0000 (0:00:02.950) 0:02:42.305 ********** 2026-03-09 01:14:08.718952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.718961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.718982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.718992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.719002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:14:08.719017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:14:08.719027 | orchestrator | 2026-03-09 01:14:08.719035 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-03-09 01:14:08.719044 | orchestrator | Monday 09 March 2026 01:10:18 +0000 (0:00:03.546) 0:02:45.852 ********** 2026-03-09 01:14:08.719052 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:14:08.719060 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:14:08.719068 | orchestrator | } 2026-03-09 01:14:08.719076 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:14:08.719084 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:14:08.719092 | orchestrator | } 2026-03-09 01:14:08.719114 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:14:08.719122 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:14:08.719130 | orchestrator | } 2026-03-09 01:14:08.719138 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 01:14:08.719154 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:14:08.719162 | orchestrator | } 2026-03-09 01:14:08.719170 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 01:14:08.719178 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:14:08.719186 | orchestrator | } 2026-03-09 01:14:08.719193 | orchestrator | changed: [testbed-node-5] => { 2026-03-09 01:14:08.719201 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:14:08.719209 | orchestrator | } 2026-03-09 01:14:08.719217 | orchestrator | 2026-03-09 01:14:08.719225 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:14:08.719253 | orchestrator | Monday 09 March 
2026 01:10:19 +0000 (0:00:01.163) 0:02:47.015 ********** 2026-03-09 01:14:08.719268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.719282 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.719291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.719299 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.719307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.719316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.719324 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.719332 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 01:14:08.719344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:14:08.719358 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.719371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:14:08.719380 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
01:14:08.719388 | orchestrator | 2026-03-09 01:14:08.719396 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:14:08.719404 | orchestrator | Monday 09 March 2026 01:10:24 +0000 (0:00:05.145) 0:02:52.161 ********** 2026-03-09 01:14:08.719412 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:14:08.719419 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:14:08.719427 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:14:08.719435 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:14:08.719443 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:14:08.719450 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:14:08.719458 | orchestrator | 2026-03-09 01:14:08.719483 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-09 01:14:08.719491 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:01.000) 0:02:53.162 ********** 2026-03-09 01:14:08.719499 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.719507 | orchestrator | 2026-03-09 01:14:08.719515 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-09 01:14:08.719523 | orchestrator | Monday 09 March 2026 01:10:27 +0000 (0:00:02.443) 0:02:55.605 ********** 2026-03-09 01:14:08.719531 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.719539 | orchestrator | 2026-03-09 01:14:08.719547 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-09 01:14:08.719555 | orchestrator | Monday 09 March 2026 01:10:30 +0000 (0:00:02.632) 0:02:58.238 ********** 2026-03-09 01:14:08.719563 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.719571 | orchestrator | 2026-03-09 01:14:08.719579 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:14:08.719587 | orchestrator | Monday 09 March 
2026 01:11:17 +0000 (0:00:46.950) 0:03:45.189 ********** 2026-03-09 01:14:08.719595 | orchestrator | 2026-03-09 01:14:08.719603 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:14:08.719611 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.069) 0:03:45.258 ********** 2026-03-09 01:14:08.719618 | orchestrator | 2026-03-09 01:14:08.719626 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:14:08.719634 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.099) 0:03:45.358 ********** 2026-03-09 01:14:08.719642 | orchestrator | 2026-03-09 01:14:08.719650 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:14:08.719658 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.279) 0:03:45.638 ********** 2026-03-09 01:14:08.719666 | orchestrator | 2026-03-09 01:14:08.719674 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:14:08.719681 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.067) 0:03:45.705 ********** 2026-03-09 01:14:08.719689 | orchestrator | 2026-03-09 01:14:08.719697 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:14:08.719705 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.067) 0:03:45.772 ********** 2026-03-09 01:14:08.719718 | orchestrator | 2026-03-09 01:14:08.719726 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-09 01:14:08.719734 | orchestrator | Monday 09 March 2026 01:11:18 +0000 (0:00:00.070) 0:03:45.843 ********** 2026-03-09 01:14:08.719742 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:14:08.719750 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:14:08.719758 | orchestrator | changed: [testbed-node-2] 2026-03-09 
01:14:08.719769 | orchestrator | 2026-03-09 01:14:08.719783 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-09 01:14:08.719796 | orchestrator | Monday 09 March 2026 01:11:52 +0000 (0:00:34.131) 0:04:19.975 ********** 2026-03-09 01:14:08.719810 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:14:08.719823 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:14:08.719838 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:14:08.719852 | orchestrator | 2026-03-09 01:14:08.719860 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:14:08.719873 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:14:08.719883 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-09 01:14:08.719891 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-09 01:14:08.719899 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:14:08.719913 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:14:08.719921 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:14:08.719929 | orchestrator | 2026-03-09 01:14:08.719937 | orchestrator | 2026-03-09 01:14:08.719945 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:14:08.719953 | orchestrator | Monday 09 March 2026 01:12:47 +0000 (0:00:55.746) 0:05:15.721 ********** 2026-03-09 01:14:08.719960 | orchestrator | =============================================================================== 2026-03-09 01:14:08.719968 | orchestrator | neutron : Restart 
neutron-ovn-metadata-agent container ----------------- 55.75s 2026-03-09 01:14:08.719976 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.95s 2026-03-09 01:14:08.719984 | orchestrator | neutron : Restart neutron-server container ----------------------------- 34.13s 2026-03-09 01:14:08.719992 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 7.78s 2026-03-09 01:14:08.720000 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 7.40s 2026-03-09 01:14:08.720008 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.61s 2026-03-09 01:14:08.720015 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.20s 2026-03-09 01:14:08.720023 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.36s 2026-03-09 01:14:08.720031 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 5.20s 2026-03-09 01:14:08.720039 | orchestrator | service-check-containers : Include tasks -------------------------------- 5.15s 2026-03-09 01:14:08.720047 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.43s 2026-03-09 01:14:08.720054 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.36s 2026-03-09 01:14:08.720062 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 4.36s 2026-03-09 01:14:08.720070 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 4.28s 2026-03-09 01:14:08.720083 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.24s 2026-03-09 01:14:08.720091 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.17s 2026-03-09 01:14:08.720099 | orchestrator | neutron : Copying over 
metering_agent.ini ------------------------------- 4.15s
2026-03-09 01:14:08.720107 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.08s
2026-03-09 01:14:08.720115 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.05s
2026-03-09 01:14:08.720123 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.92s
2026-03-09 01:14:08.720130 | orchestrator | 2026-03-09 01:14:08 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:14:08.720138 | orchestrator | 2026-03-09 01:14:08 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED
2026-03-09 01:14:08.720147 | orchestrator | 2026-03-09 01:14:08 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycles repeat roughly every 3 seconds from 01:14:11 through 01:14:57: tasks efd60857-1a0f-492c-a61c-02e3033d1b89, bf1507b1-7a66-44b9-a9af-2037365512c2, 7946169c-93a2-4791-bee6-1826068a5621 and 52e090d1-4319-49de-8476-3fea947e1700 remain in state STARTED, followed each cycle by "Wait 1 second(s) until the next check" ...]
2026-03-09 01:15:00.746087 | orchestrator | 2026-03-09 01:15:00 | INFO  | Task efd60857-1a0f-492c-a61c-02e3033d1b89 is in state STARTED
2026-03-09 01:15:00.749187 | orchestrator | 2026-03-09 01:15:00 | INFO  | Task bf1507b1-7a66-44b9-a9af-2037365512c2 is in state SUCCESS
2026-03-09 01:15:00.750248 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:15:00.750272 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:15:00.750284
| orchestrator | Monday 09 March 2026 01:13:41 +0000 (0:00:00.335) 0:00:00.335 **********
2026-03-09 01:15:00.750295 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:15:00.750307 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:15:00.750318 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:15:00.750339 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:15:00.750350 | orchestrator | Monday 09 March 2026 01:13:41 +0000 (0:00:00.403) 0:00:00.739 **********
2026-03-09 01:15:00.750361 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-09 01:15:00.750372 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-09 01:15:00.750383 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-09 01:15:00.750405 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-09 01:15:00.750427 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-09 01:15:00.750476 | orchestrator | Monday 09 March 2026 01:13:42 +0000 (0:00:00.817) 0:00:01.557 **********
2026-03-09 01:15:00.750491 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:15:00.750514 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-09 01:15:00.750525 | orchestrator | Monday 09 March 2026 01:13:43 +0000 (0:00:00.965) 0:00:02.522 **********
2026-03-09 01:15:00.750539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:15:00.750554 | orchestrator | changed: [testbed-node-1] => (item=grafana service dict, identical to the testbed-node-0 item above)
2026-03-09 01:15:00.750597 | orchestrator | changed: [testbed-node-2] => (item=grafana service dict, identical to the testbed-node-0 item above)
2026-03-09 01:15:00.750620 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-09 01:15:00.750631 | orchestrator | Monday 09 March 2026 01:13:44 +0000 (0:00:01.237) 0:00:03.760 **********
2026-03-09 01:15:00.750642 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:15:00.750664 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-09 01:15:00.750690 | orchestrator | Monday 09 March 2026 01:13:45 +0000 (0:00:01.164) 0:00:04.925 **********
2026-03-09 01:15:00.750704 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:15:00.750757 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-09 01:15:00.750794 | orchestrator | Monday 09 March 2026 01:13:46 +0000 (0:00:00.947) 0:00:05.873 **********
2026-03-09 01:15:00.750829 | orchestrator | changed: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.750852 | orchestrator | changed: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.750867 | orchestrator | changed: [testbed-node-2] => (item=grafana service dict, as above)
2026-03-09 01:15:00.750900 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-09 01:15:00.750911 | orchestrator | Monday 09 March 2026 01:13:48 +0000 (0:00:01.915) 0:00:07.789 **********
2026-03-09 01:15:00.750922 | orchestrator | skipping: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.750934 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.750958 | orchestrator | skipping: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.750979 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.751010 | orchestrator | skipping: [testbed-node-2] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751031 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:15:00.751056 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-09 01:15:00.751067 | orchestrator | Monday 09 March 2026 01:13:49 +0000 (0:00:00.480) 0:00:08.269 **********
2026-03-09 01:15:00.751078 | orchestrator | skipping: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751090 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.751101 | orchestrator | skipping: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751121 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.751132 | orchestrator | skipping: [testbed-node-2] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751144 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:15:00.751165 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-09 01:15:00.751176 | orchestrator | Monday 09 March 2026 01:13:50 +0000 (0:00:01.050) 0:00:09.320 **********
2026-03-09 01:15:00.751200 | orchestrator | changed: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751213 | orchestrator | changed: [testbed-node-2] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751224 | orchestrator | changed: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751253 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-09 01:15:00.751267 | orchestrator | Monday 09 March 2026 01:13:51 +0000 (0:00:01.510) 0:00:10.830 **********
2026-03-09 01:15:00.751286 | orchestrator | changed: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751306 | orchestrator | changed: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751331 | orchestrator | changed: [testbed-node-2] => (item=grafana service dict, as above)
2026-03-09 01:15:00.751367 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-09 01:15:00.751392 | orchestrator | Monday 09 March 2026 01:13:53 +0000 (0:00:01.720) 0:00:12.551 **********
2026-03-09 01:15:00.751411 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.751428 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.751487 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:15:00.751522 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-09 01:15:00.751540 | orchestrator | Monday 09 March 2026 01:13:54 +0000 (0:00:00.647) 0:00:13.199 **********
2026-03-09 01:15:00.751559 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-09 01:15:00.751578 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-09 01:15:00.751596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-09 01:15:00.751631 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-09 01:15:00.751649 | orchestrator | Monday 09 March 2026 01:13:55 +0000 (0:00:01.756) 0:00:14.955 **********
2026-03-09 01:15:00.751668 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-09 01:15:00.751703 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-09 01:15:00.751723 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-09 01:15:00.751759 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-03-09 01:15:00.751779 | orchestrator | Monday 09 March 2026 01:13:57 +0000 (0:00:01.598) 0:00:16.554 **********
2026-03-09 01:15:00.751795 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:15:00.751817 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-03-09 01:15:00.751828 | orchestrator | Monday 09 March 2026 01:13:58 +0000 (0:00:01.071) 0:00:17.626 **********
2026-03-09 01:15:00.751844 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:15:00.751863 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:15:00.751880 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:15:00.751915 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-09 01:15:00.751932 | orchestrator | Monday 09 March 2026 01:13:59 +0000 (0:00:01.155) 0:00:18.781 **********
2026-03-09 01:15:00.751950 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:15:00.751969 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:15:00.751987 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:15:00.752022 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-03-09 01:15:00.752041 | orchestrator | Monday 09 March 2026 01:14:02 +0000 (0:00:02.713) 0:00:21.494 **********
2026-03-09 01:15:00.752061 | orchestrator | changed: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.752081 | orchestrator | changed: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.752129 | orchestrator | changed: [testbed-node-2] => (item=grafana service dict, as above)
2026-03-09 01:15:00.752183 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-03-09 01:15:00.752201 | orchestrator | Monday 09 March 2026 01:14:04 +0000 (0:00:01.713) 0:00:23.208 **********
2026-03-09 01:15:00.752217 | orchestrator | changed: [testbed-node-0] => {
2026-03-09 01:15:00.752234 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 01:15:00.752251 | orchestrator | }
2026-03-09 01:15:00.752269 | orchestrator | changed: [testbed-node-1] => {
2026-03-09 01:15:00.752286 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 01:15:00.752303 | orchestrator | }
2026-03-09 01:15:00.752321 | orchestrator | changed: [testbed-node-2] => {
2026-03-09 01:15:00.752339 | orchestrator |  "msg": "Notifying handlers"
2026-03-09 01:15:00.752358 | orchestrator | }
2026-03-09 01:15:00.752394 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-09 01:15:00.752412 | orchestrator | Monday 09 March 2026 01:14:04 +0000 (0:00:00.434) 0:00:23.642 **********
2026-03-09 01:15:00.752460 | orchestrator | skipping: [testbed-node-0] => (item=grafana service dict, as above)
2026-03-09 01:15:00.752482 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.752502 | orchestrator | skipping: [testbed-node-1] => (item=grafana service dict, as above)
2026-03-09 01:15:00.752521 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.752541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000',
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:00.752557 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:00.752568 | orchestrator | 2026-03-09 01:15:00.752579 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-09 01:15:00.752590 | orchestrator | Monday 09 March 2026 01:14:05 +0000 (0:00:01.105) 0:00:24.748 ********** 2026-03-09 01:15:00.752601 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:00.752794 | orchestrator | 2026-03-09 01:15:00.752880 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-09 01:15:00.752891 | orchestrator | Monday 09 March 2026 01:14:07 +0000 (0:00:02.156) 0:00:26.904 ********** 2026-03-09 01:15:00.752917 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:00.752928 | orchestrator | 2026-03-09 01:15:00.752939 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-09 01:15:00.752967 | orchestrator | Monday 09 March 2026 01:14:10 +0000 (0:00:02.344) 0:00:29.249 ********** 2026-03-09 01:15:00.752986 | orchestrator | 2026-03-09 01:15:00.753003 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-09 01:15:00.753022 | orchestrator | Monday 09 March 2026 01:14:10 +0000 (0:00:00.076) 0:00:29.325 ********** 2026-03-09 01:15:00.753041 | orchestrator | 2026-03-09 01:15:00.753060 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-09 01:15:00.753089 | orchestrator | Monday 09 March 2026 01:14:10 +0000 (0:00:00.064) 0:00:29.390 ********** 2026-03-09 01:15:00.753100 | orchestrator | 2026-03-09 01:15:00.753111 | orchestrator 
| RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-09 01:15:00.753122 | orchestrator | Monday 09 March 2026 01:14:10 +0000 (0:00:00.075) 0:00:29.466 **********
2026-03-09 01:15:00.753133 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.753144 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:15:00.753155 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:15:00.753166 | orchestrator |
2026-03-09 01:15:00.753177 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-09 01:15:00.753188 | orchestrator | Monday 09 March 2026 01:14:12 +0000 (0:00:01.767) 0:00:31.234 **********
2026-03-09 01:15:00.753199 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.753210 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:15:00.753221 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-09 01:15:00.753233 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:15:00.753244 | orchestrator |
2026-03-09 01:15:00.753255 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-09 01:15:00.753266 | orchestrator | Monday 09 March 2026 01:14:27 +0000 (0:00:15.194) 0:00:46.429 **********
2026-03-09 01:15:00.753277 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.753288 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:15:00.753299 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:15:00.753310 | orchestrator |
2026-03-09 01:15:00.753321 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-09 01:15:00.753332 | orchestrator | Monday 09 March 2026 01:14:52 +0000 (0:00:24.833) 0:01:11.263 **********
2026-03-09 01:15:00.753343 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:15:00.753354 | orchestrator |
2026-03-09 01:15:00.753365 | orchestrator | TASK [grafana :
Remove old grafana docker volume] ******************************
2026-03-09 01:15:00.753376 | orchestrator | Monday 09 March 2026 01:14:54 +0000 (0:00:02.698) 0:01:13.962 **********
2026-03-09 01:15:00.753386 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.753397 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:15:00.753411 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:15:00.753424 | orchestrator |
2026-03-09 01:15:00.753469 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-09 01:15:00.753487 | orchestrator | Monday 09 March 2026 01:14:55 +0000 (0:00:00.446) 0:01:14.409 **********
2026-03-09 01:15:00.753502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-09 01:15:00.753519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-09 01:15:00.753546 | orchestrator |
2026-03-09 01:15:00.753560 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-09 01:15:00.753573 | orchestrator | Monday 09 March 2026 01:14:58 +0000 (0:00:02.637) 0:01:17.046 **********
2026-03-09 01:15:00.753586 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:15:00.753599 | orchestrator |
2026-03-09 01:15:00.753612 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:15:00.753626 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:15:00.753640 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:15:00.753654 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:15:00.753666 | orchestrator |
2026-03-09 01:15:00.753679 | orchestrator |
2026-03-09 01:15:00.753692 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:15:00.753705 | orchestrator | Monday 09 March 2026 01:14:58 +0000 (0:00:00.279) 0:01:17.325 **********
2026-03-09 01:15:00.753718 | orchestrator | ===============================================================================
2026-03-09 01:15:00.753731 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.84s
2026-03-09 01:15:00.753744 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 15.19s
2026-03-09 01:15:00.753757 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 2.71s
2026-03-09 01:15:00.753770 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.70s
2026-03-09 01:15:00.753782 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.64s
2026-03-09 01:15:00.753795 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2026-03-09 01:15:00.753821 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.16s
2026-03-09 01:15:00.753833 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.92s
2026-03-09 01:15:00.753844 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.77s
2026-03-09 01:15:00.753855 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.76s
2026-03-09 01:15:00.753876 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.72s
2026-03-09 01:15:00.753894 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.71s
2026-03-09 01:15:00.753912 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.60s
2026-03-09 01:15:00.753929 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.51s
2026-03-09 01:15:00.753947 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.24s
2026-03-09 01:15:00.753965 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.16s
2026-03-09 01:15:00.753984 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 1.16s
2026-03-09 01:15:00.754002 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.11s
2026-03-09 01:15:00.754086 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.07s
2026-03-09 01:15:00.754113 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.05s
2026-03-09 01:15:00.754133 | orchestrator | 2026-03-09 01:15:00 | INFO  | Task 88be52dc-de1a-4bc3-8e5c-889d50c056cf is in state STARTED
2026-03-09 01:15:00.755690 | orchestrator | 2026-03-09 01:15:00 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:15:00.758976 | orchestrator | 2026-03-09 01:15:00 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED
2026-03-09 01:15:00.759034 | orchestrator | 2026-03-09 01:15:00 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:15:03.800049 | orchestrator | 2026-03-09 01:15:03 | INFO  | Task efd60857-1a0f-492c-a61c-02e3033d1b89 is in state STARTED
2026-03-09 01:15:03.801758 | orchestrator | 2026-03-09 01:15:03 | INFO  |
Task 88be52dc-de1a-4bc3-8e5c-889d50c056cf is in state STARTED
2026-03-09 01:15:03.803633 | orchestrator | 2026-03-09 01:15:03 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:15:03.806153 | orchestrator | 2026-03-09 01:15:03 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED
2026-03-09 01:15:03.806211 | orchestrator | 2026-03-09 01:15:03 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:15:06.854010 | orchestrator | 2026-03-09 01:15:06 | INFO  | Task efd60857-1a0f-492c-a61c-02e3033d1b89 is in state SUCCESS
2026-03-09 01:15:06.856534 | orchestrator |
2026-03-09 01:15:06.856594 | orchestrator |
2026-03-09 01:15:06.856607 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:15:06.856618 | orchestrator |
2026-03-09 01:15:06.856629 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:15:06.856949 | orchestrator | Monday 09 March 2026 01:12:57 +0000 (0:00:00.355) 0:00:00.355 **********
2026-03-09 01:15:06.856972 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:15:06.856990 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:15:06.857007 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:15:06.857024 | orchestrator |
2026-03-09 01:15:06.857042 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:15:06.857061 | orchestrator | Monday 09 March 2026 01:12:58 +0000 (0:00:00.451) 0:00:00.807 **********
2026-03-09 01:15:06.857079 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-09 01:15:06.857098 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-09 01:15:06.857114 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-09 01:15:06.857132 | orchestrator |
2026-03-09 01:15:06.857147 | orchestrator | PLAY [Apply role magnum]
*******************************************************
2026-03-09 01:15:06.857158 | orchestrator |
2026-03-09 01:15:06.857168 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-09 01:15:06.857178 | orchestrator | Monday 09 March 2026 01:12:59 +0000 (0:00:01.049) 0:00:01.857 **********
2026-03-09 01:15:06.857188 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:15:06.857199 | orchestrator |
2026-03-09 01:15:06.857209 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-03-09 01:15:06.857219 | orchestrator | Monday 09 March 2026 01:13:00 +0000 (0:00:01.188) 0:00:03.049 **********
2026-03-09 01:15:06.857230 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-09 01:15:06.857240 | orchestrator |
2026-03-09 01:15:06.857249 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] **************
2026-03-09 01:15:06.857259 | orchestrator | Monday 09 March 2026 01:13:05 +0000 (0:00:04.917) 0:00:07.966 **********
2026-03-09 01:15:06.857269 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-09 01:15:06.857279 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-09 01:15:06.857289 | orchestrator |
2026-03-09 01:15:06.857299 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-09 01:15:06.857311 | orchestrator | Monday 09 March 2026 01:13:13 +0000 (0:00:07.720) 0:00:15.687 **********
2026-03-09 01:15:06.857345 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:15:06.857356 | orchestrator |
2026-03-09 01:15:06.857366 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-09 01:15:06.857376 | orchestrator | Monday 09 March 2026 01:13:17 +0000 (0:00:04.007) 0:00:19.694 **********
2026-03-09 01:15:06.857386 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-09 01:15:06.857419 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:15:06.857576 | orchestrator |
2026-03-09 01:15:06.857593 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-09 01:15:06.857623 | orchestrator | Monday 09 March 2026 01:13:21 +0000 (0:00:04.609) 0:00:24.304 **********
2026-03-09 01:15:06.857633 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:15:06.857643 | orchestrator |
2026-03-09 01:15:06.857653 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] *************
2026-03-09 01:15:06.857670 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:03.689) 0:00:27.993 **********
2026-03-09 01:15:06.857698 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-09 01:15:06.857715 | orchestrator |
2026-03-09 01:15:06.857732 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-09 01:15:06.857747 | orchestrator | Monday 09 March 2026 01:13:29 +0000 (0:00:03.843) 0:00:32.153 **********
2026-03-09 01:15:06.857763 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:15:06.857780 | orchestrator |
2026-03-09 01:15:06.857796 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-09 01:15:06.857814 | orchestrator | Monday 09 March 2026 01:13:33 +0000 (0:00:04.734) 0:00:35.997 **********
2026-03-09 01:15:06.857831 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:15:06.857849 | orchestrator |
2026-03-09 01:15:06.857867 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-09 01:15:06.857884 | orchestrator | Monday 09
March 2026 01:13:38 +0000 (0:00:04.734) 0:00:40.731 ********** 2026-03-09 01:15:06.857902 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:06.857918 | orchestrator | 2026-03-09 01:15:06.857928 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-09 01:15:06.857939 | orchestrator | Monday 09 March 2026 01:13:42 +0000 (0:00:04.225) 0:00:44.957 ********** 2026-03-09 01:15:06.857988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858179 | orchestrator | 2026-03-09 01:15:06.858189 | orchestrator | TASK [magnum : Check 
if policies shall be overwritten] ************************* 2026-03-09 01:15:06.858199 | orchestrator | Monday 09 March 2026 01:13:44 +0000 (0:00:02.058) 0:00:47.015 ********** 2026-03-09 01:15:06.858211 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.858223 | orchestrator | 2026-03-09 01:15:06.858234 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-09 01:15:06.858246 | orchestrator | Monday 09 March 2026 01:13:44 +0000 (0:00:00.144) 0:00:47.160 ********** 2026-03-09 01:15:06.858257 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.858269 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:15:06.858280 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:06.858295 | orchestrator | 2026-03-09 01:15:06.858305 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-09 01:15:06.858315 | orchestrator | Monday 09 March 2026 01:13:45 +0000 (0:00:00.678) 0:00:47.838 ********** 2026-03-09 01:15:06.858324 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:15:06.858334 | orchestrator | 2026-03-09 01:15:06.858343 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-09 01:15:06.858353 | orchestrator | Monday 09 March 2026 01:13:46 +0000 (0:00:01.033) 0:00:48.871 ********** 2026-03-09 01:15:06.858368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858502 | orchestrator | 2026-03-09 01:15:06.858512 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-09 01:15:06.858522 | orchestrator | Monday 09 March 2026 01:13:49 +0000 (0:00:02.990) 0:00:51.862 ********** 2026-03-09 01:15:06.858532 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:15:06.858549 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:15:06.858561 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:15:06.858571 | orchestrator | 2026-03-09 01:15:06.858580 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:15:06.858590 | orchestrator | Monday 09 March 2026 01:13:49 +0000 (0:00:00.310) 0:00:52.173 ********** 2026-03-09 01:15:06.858600 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:15:06.858610 | orchestrator | 2026-03-09 01:15:06.858619 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-09 01:15:06.858629 | orchestrator | 
Monday 09 March 2026 01:13:50 +0000 (0:00:00.900) 0:00:53.073 ********** 2026-03-09 01:15:06.858646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.858692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858703 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.858736 | orchestrator | 2026-03-09 01:15:06.858746 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-09 01:15:06.858756 | orchestrator | Monday 09 March 2026 01:13:53 +0000 (0:00:02.955) 0:00:56.029 ********** 2026-03-09 01:15:06.858766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.858782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.858792 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.858803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.858847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.858874 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:15:06.858901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.858919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.858935 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:06.858951 | orchestrator | 2026-03-09 01:15:06.858967 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-09 01:15:06.858982 | orchestrator | Monday 09 March 2026 01:13:54 +0000 (0:00:00.994) 0:00:57.024 ********** 2026-03-09 01:15:06.859005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859059 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.859088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859131 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:15:06.859141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859151 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:06.859162 | orchestrator | 2026-03-09 01:15:06.859171 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-09 01:15:06.859181 | orchestrator | Monday 09 March 2026 01:13:56 +0000 (0:00:01.773) 0:00:58.798 ********** 2026-03-09 01:15:06.859205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859233 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859289 | orchestrator | 2026-03-09 01:15:06.859299 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-09 01:15:06.859309 | orchestrator | Monday 09 March 2026 01:13:59 +0000 (0:00:03.055) 0:01:01.853 ********** 2026-03-09 01:15:06.859319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859400 | orchestrator | 2026-03-09 01:15:06.859410 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-09 01:15:06.859420 | orchestrator | Monday 09 March 2026 01:14:07 +0000 (0:00:08.110) 0:01:09.963 ********** 2026-03-09 01:15:06.859464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859493 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.859510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859531 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:15:06.859546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859573 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:06.859583 | orchestrator | 2026-03-09 01:15:06.859593 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-03-09 01:15:06.859603 | orchestrator | Monday 09 March 2026 01:14:07 +0000 (0:00:00.681) 0:01:10.645 ********** 2026-03-09 01:15:06.859618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:15:06.859656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:15:06.859699 | orchestrator | 2026-03-09 01:15:06.859709 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-03-09 01:15:06.859719 | orchestrator | Monday 09 March 2026 01:14:10 +0000 (0:00:02.465) 0:01:13.110 ********** 2026-03-09 01:15:06.859729 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:15:06.859746 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:15:06.859762 | orchestrator | } 2026-03-09 01:15:06.859778 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:15:06.859794 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:15:06.859810 | orchestrator | } 2026-03-09 01:15:06.859823 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:15:06.859837 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:15:06.859851 | orchestrator | } 2026-03-09 01:15:06.859866 | orchestrator | 2026-03-09 01:15:06.859880 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:15:06.859894 | orchestrator | 
Monday 09 March 2026 01:14:10 +0000 (0:00:00.410) 0:01:13.520 ********** 2026-03-09 01:15:06.859916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.859944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.859960 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.859977 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.860006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.860023 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:15:06.860040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:15:06.860070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:15:06.860097 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:06.860113 | orchestrator | 2026-03-09 01:15:06.860127 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:15:06.860143 | orchestrator | Monday 09 March 2026 
01:14:11 +0000 (0:00:00.864) 0:01:14.385 ********** 2026-03-09 01:15:06.860158 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:15:06.860174 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:15:06.860190 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:15:06.860205 | orchestrator | 2026-03-09 01:15:06.860219 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-09 01:15:06.860235 | orchestrator | Monday 09 March 2026 01:14:12 +0000 (0:00:00.583) 0:01:14.968 ********** 2026-03-09 01:15:06.860251 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:06.860267 | orchestrator | 2026-03-09 01:15:06.860283 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-09 01:15:06.860300 | orchestrator | Monday 09 March 2026 01:14:14 +0000 (0:00:02.198) 0:01:17.167 ********** 2026-03-09 01:15:06.860316 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:06.860332 | orchestrator | 2026-03-09 01:15:06.860348 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-09 01:15:06.860364 | orchestrator | Monday 09 March 2026 01:14:16 +0000 (0:00:02.169) 0:01:19.337 ********** 2026-03-09 01:15:06.860380 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:06.860397 | orchestrator | 2026-03-09 01:15:06.860413 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:15:06.860470 | orchestrator | Monday 09 March 2026 01:14:32 +0000 (0:00:16.007) 0:01:35.344 ********** 2026-03-09 01:15:06.860489 | orchestrator | 2026-03-09 01:15:06.860499 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:15:06.860509 | orchestrator | Monday 09 March 2026 01:14:32 +0000 (0:00:00.075) 0:01:35.420 ********** 2026-03-09 01:15:06.860518 | orchestrator | 2026-03-09 01:15:06.860528 | orchestrator | TASK 
[magnum : Flush handlers] ************************************************* 2026-03-09 01:15:06.860538 | orchestrator | Monday 09 March 2026 01:14:32 +0000 (0:00:00.104) 0:01:35.525 ********** 2026-03-09 01:15:06.860547 | orchestrator | 2026-03-09 01:15:06.860557 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-09 01:15:06.860566 | orchestrator | Monday 09 March 2026 01:14:32 +0000 (0:00:00.075) 0:01:35.600 ********** 2026-03-09 01:15:06.860576 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:06.860586 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:15:06.860595 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:15:06.860605 | orchestrator | 2026-03-09 01:15:06.860617 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-09 01:15:06.860637 | orchestrator | Monday 09 March 2026 01:14:46 +0000 (0:00:13.998) 0:01:49.599 ********** 2026-03-09 01:15:06.860662 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:15:06.860690 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:15:06.860707 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:15:06.860722 | orchestrator | 2026-03-09 01:15:06.860738 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:15:06.860753 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:15:06.860771 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:15:06.860797 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:15:06.860814 | orchestrator | 2026-03-09 01:15:06.860831 | orchestrator | 2026-03-09 01:15:06.860846 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:15:06.860863 | 
orchestrator | Monday 09 March 2026 01:15:03 +0000 (0:00:16.915) 0:02:06.514 ********** 2026-03-09 01:15:06.860880 | orchestrator | =============================================================================== 2026-03-09 01:15:06.860896 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.92s 2026-03-09 01:15:06.860912 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.01s 2026-03-09 01:15:06.860929 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.00s 2026-03-09 01:15:06.860945 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.11s 2026-03-09 01:15:06.860962 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 7.72s 2026-03-09 01:15:06.860979 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 4.92s 2026-03-09 01:15:06.860997 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.73s 2026-03-09 01:15:06.861013 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.61s 2026-03-09 01:15:06.861029 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.23s 2026-03-09 01:15:06.861046 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.16s 2026-03-09 01:15:06.861062 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.01s 2026-03-09 01:15:06.861078 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.84s 2026-03-09 01:15:06.861102 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.69s 2026-03-09 01:15:06.861120 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.06s 2026-03-09 01:15:06.861137 | orchestrator | magnum 
: Copying over kubeconfig file ----------------------------------- 2.99s 2026-03-09 01:15:06.861153 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.96s 2026-03-09 01:15:06.861206 | orchestrator | service-check-containers : magnum | Check containers -------------------- 2.47s 2026-03-09 01:15:06.861549 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.20s 2026-03-09 01:15:06.861576 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.17s 2026-03-09 01:15:06.861622 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.06s 2026-03-09 01:15:06.861639 | orchestrator | 2026-03-09 01:15:06 | INFO  | Task 88be52dc-de1a-4bc3-8e5c-889d50c056cf is in state SUCCESS 2026-03-09 01:15:06.861666 | orchestrator | 2026-03-09 01:15:06 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:15:06.863789 | orchestrator | 2026-03-09 01:15:06 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:15:06.863875 | orchestrator | 2026-03-09 01:15:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:09.914952 | orchestrator | 2026-03-09 01:15:09 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:15:09.916861 | orchestrator | 2026-03-09 01:15:09 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:15:09.918999 | orchestrator | 2026-03-09 01:15:09 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:15:09.919293 | orchestrator | 2026-03-09 01:15:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:12.964378 | orchestrator | 2026-03-09 01:15:12 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:15:12.965445 | orchestrator | 2026-03-09 01:15:12 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state 
STARTED 2026-03-09 01:15:12.966209 | orchestrator | 2026-03-09 01:15:12 | INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state STARTED 2026-03-09 01:15:12.966242 | orchestrator | 2026-03-09 01:15:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:10.990319 | orchestrator | 2026-03-09 01:16:10 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:10.993901 | orchestrator | 2026-03-09 01:16:10 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:16:10.995837 | orchestrator | 2026-03-09 01:16:10 | 
INFO  | Task 52e090d1-4319-49de-8476-3fea947e1700 is in state SUCCESS 2026-03-09 01:16:10.995914 | orchestrator | 2026-03-09 01:16:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:14.047126 | orchestrator | 2026-03-09 01:16:14 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:14.048139 | orchestrator | 2026-03-09 01:16:14 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:16:14.048179 | orchestrator | 2026-03-09 01:16:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:17.086497 | orchestrator | 2026-03-09 01:16:17 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:17.087497 | orchestrator | 2026-03-09 01:16:17 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:16:17.087528 | orchestrator | 2026-03-09 01:16:17 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:20.135804 | orchestrator | 2026-03-09 01:16:20 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:20.136389 | orchestrator | 2026-03-09 01:16:20 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:16:20.136446 | orchestrator | 2026-03-09 01:16:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:23.188753 | orchestrator | 2026-03-09 01:16:23 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:23.189471 | orchestrator | 2026-03-09 01:16:23 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 2026-03-09 01:16:23.189508 | orchestrator | 2026-03-09 01:16:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:26.231034 | orchestrator | 2026-03-09 01:16:26 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:26.231892 | orchestrator | 2026-03-09 01:16:26 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED 
2026-03-09 01:16:26.231951 | orchestrator | 2026-03-09 01:16:26 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:16:29.275039 | orchestrator | 2026-03-09 01:16:29 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED
2026-03-09 01:16:29.277170 | orchestrator | 2026-03-09 01:16:29 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:16:29.277259 | orchestrator | 2026-03-09 01:16:29 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:16:32.323898 | orchestrator | 2026-03-09 01:16:32 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED
2026-03-09 01:16:32.326667 | orchestrator | 2026-03-09 01:16:32 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:16:32.326765 | orchestrator | 2026-03-09 01:16:32 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:16:35.369796 | orchestrator | 2026-03-09 01:16:35 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED
2026-03-09 01:16:35.371329 | orchestrator | 2026-03-09 01:16:35 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:16:35.371387 | orchestrator | 2026-03-09 01:16:35 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:16:38.411719 | orchestrator | 2026-03-09 01:16:38 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED
2026-03-09 01:16:38.412199 | orchestrator | 2026-03-09 01:16:38 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state STARTED
2026-03-09 01:16:38.412231 | orchestrator | 2026-03-09 01:16:38 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:16:41.451175 | orchestrator | 2026-03-09 01:16:41 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED
2026-03-09 01:16:41.459774 | orchestrator |
2026-03-09 01:16:41.459874 | orchestrator |
2026-03-09 01:16:41.459889 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:16:41.459901 | orchestrator |
2026-03-09 01:16:41.459913 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:16:41.459930 | orchestrator | Monday 09 March 2026 01:15:03 +0000 (0:00:00.215) 0:00:00.215 **********
2026-03-09 01:16:41.459950 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.459970 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:16:41.459989 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:16:41.460008 | orchestrator |
2026-03-09 01:16:41.460027 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:16:41.460047 | orchestrator | Monday 09 March 2026 01:15:04 +0000 (0:00:00.381) 0:00:00.596 **********
2026-03-09 01:16:41.460066 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-09 01:16:41.460087 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-09 01:16:41.460140 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-09 01:16:41.460153 | orchestrator |
2026-03-09 01:16:41.460164 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-09 01:16:41.460175 | orchestrator |
2026-03-09 01:16:41.460186 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-09 01:16:41.460197 | orchestrator | Monday 09 March 2026 01:15:05 +0000 (0:00:00.797) 0:00:01.393 **********
2026-03-09 01:16:41.460857 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:16:41.460880 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.460891 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:16:41.460902 | orchestrator |
2026-03-09 01:16:41.460914 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:16:41.460926 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.460940 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.460951 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.460962 | orchestrator |
2026-03-09 01:16:41.460973 | orchestrator |
2026-03-09 01:16:41.460984 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:16:41.460995 | orchestrator | Monday 09 March 2026 01:15:05 +0000 (0:00:00.848) 0:00:02.242 **********
2026-03-09 01:16:41.461005 | orchestrator | ===============================================================================
2026-03-09 01:16:41.461016 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.85s
2026-03-09 01:16:41.461027 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2026-03-09 01:16:41.461038 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-03-09 01:16:41.461049 | orchestrator |
2026-03-09 01:16:41.461060 | orchestrator |
2026-03-09 01:16:41.461070 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-09 01:16:41.461081 | orchestrator |
2026-03-09 01:16:41.461092 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-09 01:16:41.461112 | orchestrator | Monday 09 March 2026 01:10:32 +0000 (0:00:00.165) 0:00:00.165 **********
2026-03-09 01:16:41.461131 | orchestrator | changed: [localhost]
2026-03-09 01:16:41.461149 | orchestrator |
2026-03-09 01:16:41.461168 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-09 01:16:41.461186 | orchestrator | Monday 09 March 2026 01:10:33 +0000 (0:00:01.193) 0:00:01.358 **********
2026-03-09 01:16:41.461205 | orchestrator |
2026-03-09 01:16:41.461224 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-09 01:16:41.461242 | orchestrator |
2026-03-09 01:16:41.461260 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-09 01:16:41.461279 | orchestrator |
2026-03-09 01:16:41.461318 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-09 01:16:41.461338 | orchestrator |
2026-03-09 01:16:41.461355 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-09 01:16:41.461374 | orchestrator |
2026-03-09 01:16:41.461395 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-09 01:16:41.461484 | orchestrator |
2026-03-09 01:16:41.461505 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-09 01:16:41.461525 | orchestrator | changed: [localhost]
2026-03-09 01:16:41.461544 | orchestrator |
2026-03-09 01:16:41.461559 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-09 01:16:41.461572 | orchestrator | Monday 09 March 2026 01:15:51 +0000 (0:05:18.313) 0:05:19.672 **********
2026-03-09 01:16:41.461585 | orchestrator | changed: [localhost]
2026-03-09 01:16:41.461597 | orchestrator |
2026-03-09 01:16:41.462332 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:16:41.462392 | orchestrator |
2026-03-09 01:16:41.462435 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:16:41.462455 | orchestrator | Monday 09 March 2026 01:16:06 +0000 (0:00:14.717) 0:05:34.390 **********
2026-03-09 01:16:41.462473 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.462492 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:16:41.462510 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:16:41.462528 | orchestrator |
2026-03-09 01:16:41.462545 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:16:41.462564 | orchestrator | Monday 09 March 2026 01:16:07 +0000 (0:00:00.358) 0:05:34.748 **********
2026-03-09 01:16:41.462581 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-09 01:16:41.462602 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-09 01:16:41.462620 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-09 01:16:41.462637 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-09 01:16:41.462658 | orchestrator |
2026-03-09 01:16:41.462677 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-09 01:16:41.462694 | orchestrator | skipping: no hosts matched
2026-03-09 01:16:41.462714 | orchestrator |
2026-03-09 01:16:41.462783 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:16:41.462796 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.463358 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.463386 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.463424 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:41.463445 | orchestrator |
2026-03-09 01:16:41.463463 | orchestrator |
2026-03-09 01:16:41.463481 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:16:41.463498 | orchestrator | Monday 09 March 2026 01:16:07 +0000 (0:00:00.610) 0:05:35.359 **********
2026-03-09 01:16:41.463516 | orchestrator | ===============================================================================
2026-03-09 01:16:41.463533 | orchestrator | Download ironic-agent initramfs --------------------------------------- 318.31s
2026-03-09 01:16:41.463550 | orchestrator | Download ironic-agent kernel ------------------------------------------- 14.72s
2026-03-09 01:16:41.463568 | orchestrator | Ensure the destination directory exists --------------------------------- 1.19s
2026-03-09 01:16:41.463587 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-09 01:16:41.463605 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-03-09 01:16:41.463622 | orchestrator |
2026-03-09 01:16:41.463640 | orchestrator |
2026-03-09 01:16:41.463657 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:16:41.463675 | orchestrator |
2026-03-09 01:16:41.463766 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-09 01:16:41.463787 | orchestrator | Monday 09 March 2026 01:04:57 +0000 (0:00:00.315) 0:00:00.315 **********
2026-03-09 01:16:41.463805 | orchestrator | changed: [testbed-manager]
2026-03-09 01:16:41.463906 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.463926 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:16:41.463952 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:16:41.463978 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:16:41.463995 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:16:41.464013 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:16:41.464031 | orchestrator |
2026-03-09 01:16:41.464050 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:16:41.464087 | orchestrator | Monday 09 March 2026 01:04:59 +0000 (0:00:01.659) 0:00:01.975 **********
2026-03-09 01:16:41.464107 | orchestrator | changed: [testbed-manager]
2026-03-09 01:16:41.464125 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.464146 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:16:41.464166 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:16:41.464185 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:16:41.464205 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:16:41.464218 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:16:41.464253 | orchestrator |
2026-03-09 01:16:41.464277 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:16:41.464288 | orchestrator | Monday 09 March 2026 01:05:00 +0000 (0:00:01.560) 0:00:03.535 **********
2026-03-09 01:16:41.464299 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-09 01:16:41.464311 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-09 01:16:41.464322 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-09 01:16:41.464333 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-09 01:16:41.464356 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-09 01:16:41.464367 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-09 01:16:41.464378 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-09 01:16:41.464389 | orchestrator |
2026-03-09 01:16:41.464417 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-09 01:16:41.464429 | orchestrator |
2026-03-09 01:16:41.464440 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-09 01:16:41.464452 | orchestrator | Monday 09 March 2026 01:05:03 +0000 (0:00:02.412) 0:00:05.948 **********
2026-03-09 01:16:41.464463 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:16:41.464473 | orchestrator |
2026-03-09 01:16:41.464484 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-09 01:16:41.464808 | orchestrator | Monday 09 March 2026 01:05:04 +0000 (0:00:00.972) 0:00:06.920 **********
2026-03-09 01:16:41.464821 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-09 01:16:41.464833 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-09 01:16:41.464844 | orchestrator |
2026-03-09 01:16:41.464855 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-09 01:16:41.464871 | orchestrator | Monday 09 March 2026 01:05:08 +0000 (0:00:04.697) 0:00:11.618 **********
2026-03-09 01:16:41.464889 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-09 01:16:41.464904 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-09 01:16:41.464919 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.464944 | orchestrator |
2026-03-09 01:16:41.464964 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-09 01:16:41.464982 | orchestrator | Monday 09 March 2026 01:05:13 +0000 (0:00:00.797) 0:00:16.181 **********
2026-03-09 01:16:41.464999 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.465016 | orchestrator |
2026-03-09 01:16:41.465034 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-09 01:16:41.465050 | orchestrator | Monday 09 March 2026 01:05:14 +0000 (0:00:00.797) 0:00:16.981 **********
2026-03-09 01:16:41.465211 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.465233 | orchestrator |
2026-03-09 01:16:41.465250 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-09 01:16:41.465267 | orchestrator | Monday 09 March 2026 01:05:15 +0000 (0:00:01.612) 0:00:18.593 **********
2026-03-09 01:16:41.465283 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.465298 | orchestrator |
2026-03-09 01:16:41.465316 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:16:41.465333 | orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:06.330) 0:00:24.924 **********
2026-03-09 01:16:41.465351 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.465803 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.465833 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.465852 | orchestrator |
2026-03-09 01:16:41.465869 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-09 01:16:41.465886 | orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:00.429) 0:00:25.353 **********
2026-03-09 01:16:41.465905 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.465923 | orchestrator |
2026-03-09 01:16:41.465940 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-09 01:16:41.465959 | orchestrator | Monday 09 March 2026 01:05:58 +0000 (0:00:35.968) 0:01:01.322 **********
2026-03-09 01:16:41.465976 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.465995 | orchestrator |
2026-03-09 01:16:41.466012 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-09 01:16:41.466079 | orchestrator | Monday 09 March 2026 01:06:17 +0000 (0:00:19.038) 0:01:20.361 **********
2026-03-09 01:16:41.466108 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.466129 | orchestrator |
2026-03-09 01:16:41.466148 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-09 01:16:41.466166 | orchestrator | Monday 09 March 2026 01:06:31 +0000 (0:00:14.169) 0:01:34.530 **********
2026-03-09 01:16:41.466184 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.466201 | orchestrator |
2026-03-09 01:16:41.466219 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-09 01:16:41.466238 | orchestrator | Monday 09 March 2026 01:06:33 +0000 (0:00:01.961) 0:01:36.491 **********
2026-03-09 01:16:41.466255 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.466275 | orchestrator |
2026-03-09 01:16:41.466293 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:16:41.466312 | orchestrator | Monday 09 March 2026 01:06:35 +0000 (0:00:02.078) 0:01:38.570 **********
2026-03-09 01:16:41.466331 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:16:41.466350 | orchestrator |
2026-03-09 01:16:41.466368 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-09 01:16:41.466387 | orchestrator | Monday 09 March 2026 01:06:37 +0000 (0:00:01.223) 0:01:39.793 **********
2026-03-09 01:16:41.466451 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.466471 | orchestrator |
2026-03-09 01:16:41.466490 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-09 01:16:41.466509 | orchestrator | Monday 09 March 2026 01:06:58 +0000 (0:00:21.339) 0:02:01.134 **********
2026-03-09 01:16:41.466528 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.466547 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.466565 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.466584 | orchestrator |
2026-03-09 01:16:41.466603 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-09 01:16:41.466621 | orchestrator |
2026-03-09 01:16:41.466640 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-09 01:16:41.466659 | orchestrator | Monday 09 March 2026 01:06:58 +0000 (0:00:00.322) 0:02:01.457 **********
2026-03-09 01:16:41.466678 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:16:41.466696 | orchestrator |
2026-03-09 01:16:41.466730 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-09 01:16:41.466749 | orchestrator | Monday 09 March 2026 01:06:59 +0000 (0:00:00.626) 0:02:02.083 **********
2026-03-09 01:16:41.466768 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.466787 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.466806 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.466825 | orchestrator |
2026-03-09 01:16:41.466845 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-09 01:16:41.466863 | orchestrator | Monday 09 March 2026 01:07:01 +0000 (0:00:02.282) 0:02:04.366 **********
2026-03-09 01:16:41.466896 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.466908 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.466922 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.466940 | orchestrator |
2026-03-09 01:16:41.466966 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-09 01:16:41.466988 | orchestrator | Monday 09 March 2026 01:07:04 +0000 (0:00:02.736) 0:02:07.102 **********
2026-03-09 01:16:41.467005 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.467023 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467040 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467056 | orchestrator |
2026-03-09 01:16:41.467072 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-09 01:16:41.467089 | orchestrator | Monday 09 March 2026 01:07:04 +0000 (0:00:00.541) 0:02:07.644 **********
2026-03-09 01:16:41.467104 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-09 01:16:41.467121 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467138 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-09 01:16:41.467154 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467172 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-09 01:16:41.467190 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-09 01:16:41.467208 | orchestrator |
2026-03-09 01:16:41.467224 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-09 01:16:41.467242 | orchestrator | Monday 09 March 2026 01:07:18 +0000 (0:00:14.001) 0:02:21.645 **********
2026-03-09 01:16:41.467261 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.467466 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467486 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467497 | orchestrator |
2026-03-09 01:16:41.467508 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-09 01:16:41.467519 | orchestrator | Monday 09 March 2026 01:07:19 +0000 (0:00:00.480) 0:02:22.126 **********
2026-03-09 01:16:41.467530 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-09 01:16:41.467541 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.467552 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-09 01:16:41.467563 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467574 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-09 01:16:41.467585 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467595 | orchestrator |
2026-03-09 01:16:41.467606 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-09 01:16:41.467617 | orchestrator | Monday 09 March 2026 01:07:20 +0000 (0:00:00.981) 0:02:23.108 **********
2026-03-09 01:16:41.467628 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467639 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467649 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.467660 | orchestrator |
2026-03-09 01:16:41.467671 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-09 01:16:41.467681 | orchestrator | Monday 09 March 2026 01:07:21 +0000 (0:00:00.828) 0:02:23.936 **********
2026-03-09 01:16:41.467692 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467703 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467713 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.467724 | orchestrator |
2026-03-09 01:16:41.467735 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-09 01:16:41.467746 | orchestrator | Monday 09 March 2026 01:07:22 +0000 (0:00:01.093) 0:02:25.030 **********
2026-03-09 01:16:41.467757 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467767 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467778 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.467789 | orchestrator |
2026-03-09 01:16:41.467800 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-09 01:16:41.467810 | orchestrator | Monday 09 March 2026 01:07:24 +0000 (0:00:02.597) 0:02:27.627 **********
2026-03-09 01:16:41.467836 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467851 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467870 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.467888 | orchestrator |
2026-03-09 01:16:41.467908 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-09 01:16:41.467928 | orchestrator | Monday 09 March 2026 01:07:48 +0000 (0:00:23.577) 0:02:51.204 **********
2026-03-09 01:16:41.467947 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.467965 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.467984 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.468002 | orchestrator |
2026-03-09 01:16:41.468021 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-09 01:16:41.468039 | orchestrator | Monday 09 March 2026 01:08:02 +0000 (0:00:14.400) 0:03:05.605 **********
2026-03-09 01:16:41.468057 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:41.468076 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.468095 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.468113 | orchestrator |
2026-03-09 01:16:41.468130 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-09 01:16:41.468148 | orchestrator | Monday 09 March 2026 01:08:03 +0000 (0:00:00.943) 0:03:06.549 **********
2026-03-09 01:16:41.468166 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.468186 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.468205 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:41.468226 | orchestrator |
2026-03-09 01:16:41.468283 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-09 01:16:41.468303 | orchestrator | Monday 09 March 2026 01:08:16 +0000 (0:00:12.321) 0:03:18.870 **********
2026-03-09 01:16:41.468322 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.468351 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.468363 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.468374 | orchestrator |
2026-03-09 01:16:41.468385 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-09 01:16:41.468396 | orchestrator | Monday 09 March 2026 01:08:17 +0000 (0:00:01.219) 0:03:20.089 **********
2026-03-09 01:16:41.468437 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.468448 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.468459 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.468470 | orchestrator |
2026-03-09 01:16:41.468481 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-09 01:16:41.468491 | orchestrator |
2026-03-09 01:16:41.468502 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:16:41.468513 | orchestrator | Monday 09 March 2026 01:08:18 +0000 (0:00:01.015) 0:03:21.106 **********
2026-03-09 01:16:41.468524 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:16:41.468536 | orchestrator |
2026-03-09 01:16:41.468547 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-03-09 01:16:41.468557 | orchestrator | Monday 09 March 2026 01:08:19 +0000 (0:00:00.878) 0:03:21.984 **********
2026-03-09 01:16:41.468568 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-09 01:16:41.468579 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-09 01:16:41.468591 | orchestrator |
2026-03-09 01:16:41.468601 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-03-09 01:16:41.468612 | orchestrator | Monday 09 March 2026 01:08:23 +0000 (0:00:03.796) 0:03:25.781 **********
2026-03-09 01:16:41.468623 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-09 01:16:41.468635 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-09 01:16:41.468763 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-09 01:16:41.468792 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-09 01:16:41.468803 | orchestrator |
2026-03-09 01:16:41.468824 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-09 01:16:41.468846 | orchestrator | Monday 09 March 2026 01:08:30 +0000 (0:00:07.721) 0:03:33.503 **********
2026-03-09 01:16:41.468874 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:16:41.468893 | orchestrator |
2026-03-09 01:16:41.468913 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-09 01:16:41.468931 | orchestrator | Monday 09 March 2026 01:08:34 +0000 (0:00:03.975) 0:03:37.478 **********
2026-03-09 01:16:41.468950 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-09 01:16:41.468968 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:16:41.468988 | orchestrator |
2026-03-09 01:16:41.469004 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-09 01:16:41.469023 | orchestrator | Monday 09 March 2026 01:08:39 +0000 (0:00:05.210) 0:03:42.688 **********
2026-03-09 01:16:41.469040 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:16:41.469058 | orchestrator |
2026-03-09 01:16:41.469075 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-03-09 01:16:41.469093 | orchestrator | Monday 09 March 2026 01:08:43 +0000 (0:00:03.875) 0:03:46.564 **********
2026-03-09 01:16:41.469112 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-09 01:16:41.469132 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-09 01:16:41.469152 | orchestrator |
2026-03-09 01:16:41.469172 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-09 01:16:41.469189 | orchestrator | Monday 09 March 2026 01:08:52 +0000 (0:00:08.695) 0:03:55.260 **********
2026-03-09 01:16:41.469216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:16:41.469255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:16:41.469462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:16:41.469495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:16:41.469518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-09 01:16:41.469548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group':
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.469680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.469736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.469757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.469775 | orchestrator | 2026-03-09 01:16:41.469793 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-09 01:16:41.469812 | orchestrator | Monday 09 March 2026 01:08:55 +0000 (0:00:03.180) 0:03:58.440 ********** 2026-03-09 01:16:41.469829 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.469847 | orchestrator | 2026-03-09 01:16:41.469865 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-09 01:16:41.469883 | orchestrator | Monday 09 March 2026 01:08:55 +0000 (0:00:00.155) 0:03:58.595 ********** 2026-03-09 01:16:41.469901 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.469919 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.469937 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.469955 | orchestrator | 2026-03-09 01:16:41.469973 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-09 01:16:41.469992 | orchestrator | Monday 09 March 2026 01:08:57 +0000 (0:00:01.471) 0:04:00.066 ********** 2026-03-09 01:16:41.470011 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-03-09 01:16:41.470079 | orchestrator | 2026-03-09 01:16:41.470098 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-09 01:16:41.470116 | orchestrator | Monday 09 March 2026 01:08:58 +0000 (0:00:01.246) 0:04:01.313 ********** 2026-03-09 01:16:41.470133 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.470150 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.470167 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.470186 | orchestrator | 2026-03-09 01:16:41.470204 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-09 01:16:41.470221 | orchestrator | Monday 09 March 2026 01:08:59 +0000 (0:00:00.494) 0:04:01.808 ********** 2026-03-09 01:16:41.470239 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:16:41.470257 | orchestrator | 2026-03-09 01:16:41.470292 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-09 01:16:41.470346 | orchestrator | Monday 09 March 2026 01:09:00 +0000 (0:00:01.160) 0:04:02.969 ********** 2026-03-09 01:16:41.470381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.470663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.470704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.470735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.470773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.470917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41 | INFO  | Task 7946169c-93a2-4791-bee6-1826068a5621 is in state SUCCESS 2026-03-09 01:16:41.470953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.470965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.470977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.471001 | orchestrator | 2026-03-09 01:16:41.471017 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-09 01:16:41.471028 | orchestrator | Monday 09 March 2026 01:09:06 +0000 
(0:00:06.567) 0:04:09.536 ********** 2026-03-09 01:16:41.471038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.471147 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.471159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-03-09 01:16:41.471183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.471270 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.471285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471320 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.471330 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.471340 | orchestrator | 2026-03-09 01:16:41.471350 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-09 01:16:41.471359 | orchestrator | Monday 09 March 2026 01:09:08 +0000 (0:00:01.748) 0:04:11.284 ********** 2026-03-09 01:16:41.471370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.471539 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.471550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  
2026-03-09 01:16:41.471659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.471681 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.471699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.471751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.471769 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.471785 | orchestrator | 2026-03-09 01:16:41.471802 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-09 01:16:41.471819 | orchestrator | Monday 09 March 2026 01:09:11 +0000 (0:00:02.698) 0:04:13.982 ********** 2026-03-09 01:16:41.471939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.471964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-03-09 01:16:41.472009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.472198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472215 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.472252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.472262 | orchestrator | 2026-03-09 01:16:41.472272 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-09 01:16:41.472282 | orchestrator | Monday 09 March 2026 01:09:17 +0000 (0:00:06.166) 0:04:20.148 ********** 2026-03-09 01:16:41.472355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-03-09 01:16:41.472556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.472614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.472633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.472657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.472674 | orchestrator | 2026-03-09 01:16:41.472691 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-09 01:16:41.472706 | orchestrator | Monday 09 March 2026 01:09:32 +0000 (0:00:15.402) 0:04:35.551 ********** 2026-03-09 01:16:41.472831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.472857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.472888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.472933 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.472961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.472982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.473054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.473088 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.473106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.473124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.473149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.473167 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.473180 | orchestrator | 2026-03-09 01:16:41.473190 | orchestrator | TASK [nova : Copying 
over nova-api-wsgi.conf] ********************************** 2026-03-09 01:16:41.473200 | orchestrator | Monday 09 March 2026 01:09:34 +0000 (0:00:01.540) 0:04:37.091 ********** 2026-03-09 01:16:41.473210 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.473219 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.473229 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.473239 | orchestrator | 2026-03-09 01:16:41.473248 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-03-09 01:16:41.473258 | orchestrator | Monday 09 March 2026 01:09:36 +0000 (0:00:01.712) 0:04:38.804 ********** 2026-03-09 01:16:41.473267 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.473277 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.473286 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.473296 | orchestrator | 2026-03-09 01:16:41.473306 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-03-09 01:16:41.473315 | orchestrator | Monday 09 March 2026 01:09:37 +0000 (0:00:01.557) 0:04:40.362 ********** 2026-03-09 01:16:41.473332 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-03-09 01:16:41.473342 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-09 01:16:41.473352 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.473362 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-03-09 01:16:41.473372 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-09 01:16:41.473488 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.473503 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-03-09 01:16:41.473516 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-09 01:16:41.473528 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.473538 | orchestrator | 
2026-03-09 01:16:41.473550 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-03-09 01:16:41.473561 | orchestrator | Monday 09 March 2026 01:09:38 +0000 (0:00:00.862) 0:04:41.224 ********** 2026-03-09 01:16:41.473573 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-03-09 01:16:41.473587 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-03-09 01:16:41.473598 | orchestrator | 2026-03-09 01:16:41.473609 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-03-09 01:16:41.473621 | orchestrator | Monday 09 March 2026 01:09:41 +0000 (0:00:03.015) 0:04:44.240 ********** 2026-03-09 01:16:41.473632 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.473643 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.473654 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.473665 | orchestrator | 2026-03-09 01:16:41.473676 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-03-09 01:16:41.473687 | orchestrator | Monday 09 March 2026 01:09:45 +0000 (0:00:03.803) 0:04:48.043 ********** 2026-03-09 01:16:41.473698 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.473709 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.473720 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.473730 | orchestrator | 2026-03-09 01:16:41.473746 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-03-09 01:16:41.473758 | orchestrator | Monday 09 March 2026 01:09:48 +0000 (0:00:03.557) 0:04:51.601 ********** 2026-03-09 01:16:41.473771 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.473791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.473840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.473853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.473864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.473884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 
01:16:41.473902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.473941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-09 01:16:41.473953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.473964 | orchestrator | 2026-03-09 01:16:41.473973 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-03-09 01:16:41.473983 | orchestrator | Monday 09 March 2026 01:09:54 +0000 (0:00:05.910) 0:04:57.512 ********** 2026-03-09 01:16:41.473992 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:16:41.474000 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.474008 | orchestrator | } 2026-03-09 01:16:41.474043 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:16:41.474054 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.474061 | orchestrator | } 2026-03-09 01:16:41.474069 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:16:41.474077 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.474085 | orchestrator | } 2026-03-09 01:16:41.474093 | orchestrator | 2026-03-09 01:16:41.474100 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:16:41.474108 | orchestrator | Monday 09 March 2026 01:09:55 +0000 (0:00:00.984) 0:04:58.496 ********** 2026-03-09 01:16:41.474122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.474146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.474198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.474214 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.474226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.474240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.474272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.474323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.474341 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.474355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-09 01:16:41.474369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-03-09 01:16:41.474383 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.474430 | orchestrator | 2026-03-09 01:16:41.474446 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:16:41.474461 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:01.820) 0:05:00.317 ********** 2026-03-09 01:16:41.474475 | orchestrator | 2026-03-09 01:16:41.474489 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:16:41.474502 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:00.103) 0:05:00.420 ********** 2026-03-09 01:16:41.474515 | orchestrator | 2026-03-09 01:16:41.474530 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:16:41.474543 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:00.204) 0:05:00.625 ********** 2026-03-09 01:16:41.474558 | orchestrator | 2026-03-09 01:16:41.474571 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-09 01:16:41.474584 | orchestrator | Monday 09 March 2026 01:09:58 +0000 (0:00:00.319) 0:05:00.944 ********** 2026-03-09 01:16:41.474596 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.474609 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.474622 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.474642 | orchestrator | 2026-03-09 01:16:41.474655 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-09 01:16:41.474667 | orchestrator | Monday 09 March 2026 01:10:20 +0000 (0:00:22.596) 0:05:23.541 ********** 2026-03-09 01:16:41.474679 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.474692 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.474704 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.474718 | orchestrator | 2026-03-09 01:16:41.474731 | 
orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-03-09 01:16:41.474743 | orchestrator | Monday 09 March 2026 01:10:28 +0000 (0:00:08.144) 0:05:31.685 ********** 2026-03-09 01:16:41.474756 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.474769 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.474782 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.474795 | orchestrator | 2026-03-09 01:16:41.474808 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-09 01:16:41.474822 | orchestrator | 2026-03-09 01:16:41.474834 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:16:41.474848 | orchestrator | Monday 09 March 2026 01:10:34 +0000 (0:00:06.036) 0:05:37.721 ********** 2026-03-09 01:16:41.474861 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:16:41.474877 | orchestrator | 2026-03-09 01:16:41.474891 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:16:41.474904 | orchestrator | Monday 09 March 2026 01:10:36 +0000 (0:00:01.301) 0:05:39.022 ********** 2026-03-09 01:16:41.474917 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.474931 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.474944 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.474957 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.474970 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.474983 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.474998 | orchestrator | 2026-03-09 01:16:41.475012 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-03-09 01:16:41.475093 | orchestrator | 
Monday 09 March 2026 01:10:36 +0000 (0:00:00.625) 0:05:39.648 ********** 2026-03-09 01:16:41.475110 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.475123 | orchestrator | 2026-03-09 01:16:41.475135 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-03-09 01:16:41.475147 | orchestrator | Monday 09 March 2026 01:11:04 +0000 (0:00:27.155) 0:06:06.803 ********** 2026-03-09 01:16:41.475159 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:41.475171 | orchestrator | 2026-03-09 01:16:41.475183 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-03-09 01:16:41.475195 | orchestrator | Monday 09 March 2026 01:11:06 +0000 (0:00:01.964) 0:06:08.768 ********** 2026-03-09 01:16:41.475221 | orchestrator | included: service-image-info for testbed-node-3 2026-03-09 01:16:41.475234 | orchestrator | 2026-03-09 01:16:41.475247 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-03-09 01:16:41.475259 | orchestrator | Monday 09 March 2026 01:11:07 +0000 (0:00:01.007) 0:06:09.775 ********** 2026-03-09 01:16:41.475270 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:41.475282 | orchestrator | 2026-03-09 01:16:41.475295 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-03-09 01:16:41.475308 | orchestrator | Monday 09 March 2026 01:11:10 +0000 (0:00:03.542) 0:06:13.318 ********** 2026-03-09 01:16:41.475320 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:41.475333 | orchestrator | 2026-03-09 01:16:41.475346 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-03-09 01:16:41.475360 | orchestrator | Monday 09 March 2026 01:11:12 +0000 (0:00:02.354) 0:06:15.673 ********** 2026-03-09 01:16:41.475372 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.475384 | orchestrator | 2026-03-09 
01:16:41.475397 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-03-09 01:16:41.475441 | orchestrator | Monday 09 March 2026 01:11:15 +0000 (0:00:02.611) 0:06:18.284 ********** 2026-03-09 01:16:41.475454 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.475467 | orchestrator | 2026-03-09 01:16:41.475480 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-03-09 01:16:41.475493 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:01.935) 0:06:20.220 ********** 2026-03-09 01:16:41.475506 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:16:41.475521 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 01:16:41.475534 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:16:41.475547 | orchestrator | 2026-03-09 01:16:41.475560 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-03-09 01:16:41.475573 | orchestrator | Monday 09 March 2026 01:11:30 +0000 (0:00:12.875) 0:06:33.095 ********** 2026-03-09 01:16:41.475587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:16:41.475602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:16:41.475616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:16:41.475630 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.475643 | orchestrator | 2026-03-09 01:16:41.475655 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-03-09 01:16:41.475667 | orchestrator | Monday 09 March 2026 01:11:36 +0000 (0:00:05.981) 0:06:39.077 ********** 2026-03-09 01:16:41.475680 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 
'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})  2026-03-09 01:16:41.475706 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})  2026-03-09 01:16:41.475719 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})  2026-03-09 01:16:41.475731 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.475742 | orchestrator | 2026-03-09 01:16:41.475754 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-09 01:16:41.475807 | orchestrator | Monday 09 March 2026 01:11:40 +0000 (0:00:03.696) 0:06:42.774 ********** 2026-03-09 01:16:41.475822 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.475834 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.475847 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.475859 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:16:41.475872 | orchestrator | 2026-03-09 01:16:41.475884 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 01:16:41.475896 | orchestrator | Monday 09 March 2026 01:11:41 +0000 (0:00:01.384) 0:06:44.158 ********** 2026-03-09 01:16:41.475909 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-09 01:16:41.475923 | 
orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-09 01:16:41.475936 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-09 01:16:41.475947 | orchestrator | 2026-03-09 01:16:41.475960 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 01:16:41.475974 | orchestrator | Monday 09 March 2026 01:11:42 +0000 (0:00:01.170) 0:06:45.329 ********** 2026-03-09 01:16:41.476059 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-09 01:16:41.476077 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-09 01:16:41.476088 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-09 01:16:41.476102 | orchestrator | 2026-03-09 01:16:41.476116 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 01:16:41.476130 | orchestrator | Monday 09 March 2026 01:11:43 +0000 (0:00:01.315) 0:06:46.645 ********** 2026-03-09 01:16:41.476143 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-09 01:16:41.476156 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.476169 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-09 01:16:41.476182 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.476195 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-09 01:16:41.476208 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.476221 | orchestrator | 2026-03-09 01:16:41.476233 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-09 01:16:41.476246 | orchestrator | Monday 09 March 2026 01:11:45 +0000 (0:00:01.084) 0:06:47.730 ********** 2026-03-09 01:16:41.476260 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-09 01:16:41.476274 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-03-09 01:16:41.476288 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.476302 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-09 01:16:41.476314 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-09 01:16:41.476326 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-09 01:16:41.476340 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.476353 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-09 01:16:41.476365 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-09 01:16:41.476378 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-09 01:16:41.476392 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.476476 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-09 01:16:41.476491 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-09 01:16:41.476503 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-09 01:16:41.476516 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-09 01:16:41.476529 | orchestrator | 2026-03-09 01:16:41.476542 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-09 01:16:41.476571 | orchestrator | Monday 09 March 2026 01:11:47 +0000 (0:00:02.144) 0:06:49.874 ********** 2026-03-09 01:16:41.476584 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.476598 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.476611 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.476624 | orchestrator | changed: [testbed-node-3] 2026-03-09 
01:16:41.476636 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:16:41.476645 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:16:41.476652 | orchestrator |
2026-03-09 01:16:41.476660 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-09 01:16:41.476668 | orchestrator | Monday 09 March 2026 01:11:48 +0000 (0:00:01.287) 0:06:51.162 **********
2026-03-09 01:16:41.476676 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.476684 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.476691 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.476699 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:16:41.476716 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:16:41.476724 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:16:41.476732 | orchestrator |
2026-03-09 01:16:41.476740 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-09 01:16:41.476747 | orchestrator | Monday 09 March 2026 01:11:50 +0000 (0:00:01.761) 0:06:52.923 **********
2026-03-09 01:16:41.476758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.476816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.476827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.476836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.476852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.476867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.476876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.476905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.476913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.476920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.476937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.476948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.476955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.476984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.476992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.476999 | orchestrator |
2026-03-09 01:16:41.477006 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-09 01:16:41.477018 | orchestrator | Monday 09 March 2026 01:11:53 +0000 (0:00:03.035) 0:06:55.958 **********
2026-03-09 01:16:41.477026 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:16:41.477033 | orchestrator |
2026-03-09 01:16:41.477039 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-09 01:16:41.477046 | orchestrator | Monday 09 March 2026 01:11:54 +0000 (0:00:01.533) 0:06:57.492 **********
2026-03-09 01:16:41.477053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.477103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.477139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.477154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477306 | orchestrator |
2026-03-09 01:16:41.477316 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-09 01:16:41.477328 | orchestrator | Monday 09 March 2026 01:11:59 +0000 (0:00:05.152) 0:07:02.645 **********
2026-03-09 01:16:41.477362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477436 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:16:41.477444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.477474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477487 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:16:41.477495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477501 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:41.477509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.477516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477523 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:41.477530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:16:41.477568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477576 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:41.477604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477632 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:16:41.477639 | orchestrator |
2026-03-09 01:16:41.477645 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-09 01:16:41.477652 | orchestrator | Monday 09 March 2026 01:12:03 +0000 (0:00:04.019) 0:07:06.664 **********
2026-03-09 01:16:41.477664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:16:41.477712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:16:41.477719 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:16:41.477726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:16:41.477734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt,
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.477741 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.477754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.477762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.477774 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.477799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.477807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.477814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.477821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.477828 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.477839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.477846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.477858 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.477882 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.477890 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.477897 | orchestrator | 2026-03-09 01:16:41.477904 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:16:41.477911 | orchestrator | Monday 09 March 2026 01:12:08 +0000 (0:00:04.190) 0:07:10.854 ********** 2026-03-09 01:16:41.477918 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.477924 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.477931 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.477938 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:16:41.477945 | orchestrator | 2026-03-09 01:16:41.477951 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-09 01:16:41.477958 | orchestrator | Monday 09 March 2026 01:12:09 +0000 (0:00:00.945) 0:07:11.800 ********** 2026-03-09 01:16:41.477965 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:16:41.477972 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:16:41.477978 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 
01:16:41.477985 | orchestrator | 2026-03-09 01:16:41.477992 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-09 01:16:41.477998 | orchestrator | Monday 09 March 2026 01:12:10 +0000 (0:00:01.371) 0:07:13.172 ********** 2026-03-09 01:16:41.478005 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:16:41.478012 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:16:41.478066 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:16:41.478074 | orchestrator | 2026-03-09 01:16:41.478081 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-09 01:16:41.478087 | orchestrator | Monday 09 March 2026 01:12:11 +0000 (0:00:01.063) 0:07:14.235 ********** 2026-03-09 01:16:41.478094 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:41.478101 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:16:41.478108 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:16:41.478115 | orchestrator | 2026-03-09 01:16:41.478121 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-09 01:16:41.478128 | orchestrator | Monday 09 March 2026 01:12:12 +0000 (0:00:00.552) 0:07:14.788 ********** 2026-03-09 01:16:41.478135 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:41.478141 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:16:41.478148 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:16:41.478155 | orchestrator | 2026-03-09 01:16:41.478161 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-09 01:16:41.478168 | orchestrator | Monday 09 March 2026 01:12:12 +0000 (0:00:00.634) 0:07:15.423 ********** 2026-03-09 01:16:41.478175 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-09 01:16:41.478187 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-09 01:16:41.478194 | orchestrator | changed: 
[testbed-node-5] => (item=nova-compute) 2026-03-09 01:16:41.478204 | orchestrator | 2026-03-09 01:16:41.478215 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-09 01:16:41.478225 | orchestrator | Monday 09 March 2026 01:12:14 +0000 (0:00:01.576) 0:07:17.000 ********** 2026-03-09 01:16:41.478243 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-09 01:16:41.478256 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-09 01:16:41.478267 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-09 01:16:41.478278 | orchestrator | 2026-03-09 01:16:41.478289 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-09 01:16:41.478307 | orchestrator | Monday 09 March 2026 01:12:15 +0000 (0:00:01.305) 0:07:18.305 ********** 2026-03-09 01:16:41.478318 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-09 01:16:41.478328 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-09 01:16:41.478339 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-09 01:16:41.478350 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-09 01:16:41.478361 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-09 01:16:41.478373 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-09 01:16:41.478384 | orchestrator | 2026-03-09 01:16:41.478395 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-09 01:16:41.478423 | orchestrator | Monday 09 March 2026 01:12:19 +0000 (0:00:03.990) 0:07:22.295 ********** 2026-03-09 01:16:41.478433 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.478442 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.478452 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.478463 | orchestrator | 
2026-03-09 01:16:41.478473 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-09 01:16:41.478485 | orchestrator | Monday 09 March 2026 01:12:19 +0000 (0:00:00.323) 0:07:22.619 ********** 2026-03-09 01:16:41.478496 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.478507 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.478518 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.478528 | orchestrator | 2026-03-09 01:16:41.478539 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-09 01:16:41.478551 | orchestrator | Monday 09 March 2026 01:12:20 +0000 (0:00:00.635) 0:07:23.255 ********** 2026-03-09 01:16:41.478560 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.478567 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.478574 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.478580 | orchestrator | 2026-03-09 01:16:41.478587 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-09 01:16:41.478594 | orchestrator | Monday 09 March 2026 01:12:21 +0000 (0:00:01.354) 0:07:24.610 ********** 2026-03-09 01:16:41.478633 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-09 01:16:41.478642 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-09 01:16:41.478649 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-09 01:16:41.478658 | orchestrator | changed: 
[testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-09 01:16:41.478665 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-09 01:16:41.478679 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-09 01:16:41.478686 | orchestrator | 2026-03-09 01:16:41.478693 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-09 01:16:41.478700 | orchestrator | Monday 09 March 2026 01:12:25 +0000 (0:00:03.648) 0:07:28.258 ********** 2026-03-09 01:16:41.478707 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 01:16:41.478713 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 01:16:41.478720 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 01:16:41.478726 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 01:16:41.478733 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.478740 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 01:16:41.478746 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.478753 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 01:16:41.478760 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.478766 | orchestrator | 2026-03-09 01:16:41.478773 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-09 01:16:41.478780 | orchestrator | Monday 09 March 2026 01:12:28 +0000 (0:00:03.271) 0:07:31.530 ********** 2026-03-09 01:16:41.478786 | 
orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.478793 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.478800 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.478806 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:16:41.478813 | orchestrator | 2026-03-09 01:16:41.478820 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-09 01:16:41.478826 | orchestrator | Monday 09 March 2026 01:12:30 +0000 (0:00:02.063) 0:07:33.593 ********** 2026-03-09 01:16:41.478833 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:16:41.478840 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:16:41.478846 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:16:41.478853 | orchestrator | 2026-03-09 01:16:41.478860 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-09 01:16:41.478866 | orchestrator | Monday 09 March 2026 01:12:32 +0000 (0:00:01.151) 0:07:34.746 ********** 2026-03-09 01:16:41.478878 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.478884 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.478891 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.478898 | orchestrator | 2026-03-09 01:16:41.478904 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-09 01:16:41.478911 | orchestrator | Monday 09 March 2026 01:12:32 +0000 (0:00:00.558) 0:07:35.304 ********** 2026-03-09 01:16:41.478918 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.478924 | orchestrator | 2026-03-09 01:16:41.478931 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-09 01:16:41.478938 | orchestrator | Monday 09 March 2026 01:12:32 +0000 (0:00:00.133) 0:07:35.438 ********** 
2026-03-09 01:16:41.478944 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.478951 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.478958 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.478964 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.478971 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.478977 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.478984 | orchestrator | 2026-03-09 01:16:41.478990 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-09 01:16:41.478997 | orchestrator | Monday 09 March 2026 01:12:33 +0000 (0:00:00.609) 0:07:36.048 ********** 2026-03-09 01:16:41.479004 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:16:41.479015 | orchestrator | 2026-03-09 01:16:41.479022 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-09 01:16:41.479028 | orchestrator | Monday 09 March 2026 01:12:34 +0000 (0:00:00.801) 0:07:36.849 ********** 2026-03-09 01:16:41.479035 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.479042 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.479048 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.479055 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.479061 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.479068 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.479074 | orchestrator | 2026-03-09 01:16:41.479081 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-09 01:16:41.479088 | orchestrator | Monday 09 March 2026 01:12:35 +0000 (0:00:00.901) 0:07:37.750 ********** 2026-03-09 01:16:41.479115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479285 | orchestrator | 2026-03-09 01:16:41.479292 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-09 01:16:41.479300 | orchestrator | Monday 09 March 2026 01:12:39 +0000 (0:00:04.218) 0:07:41.969 ********** 2026-03-09 01:16:41.479317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.479338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.479381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.479394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2026-03-09 01:16:41.479459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.479471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.479501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479566 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.479595 | orchestrator | 2026-03-09 01:16:41.479603 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] 
******************* 2026-03-09 01:16:41.479609 | orchestrator | Monday 09 March 2026 01:12:48 +0000 (0:00:08.933) 0:07:50.903 ********** 2026-03-09 01:16:41.479616 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.479623 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.479630 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.479636 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.479643 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.479649 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.479656 | orchestrator | 2026-03-09 01:16:41.479663 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-09 01:16:41.479669 | orchestrator | Monday 09 March 2026 01:12:50 +0000 (0:00:02.250) 0:07:53.154 ********** 2026-03-09 01:16:41.479676 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:16:41.479683 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:16:41.479690 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:16:41.479696 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:16:41.479703 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:16:41.479710 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:16:41.479716 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:16:41.479724 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.479731 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:16:41.479742 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 01:16:41.479748 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:16:41.479755 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.479762 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:16:41.479768 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:16:41.479775 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:16:41.479782 | orchestrator | 2026-03-09 01:16:41.479789 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-09 01:16:41.479795 | orchestrator | Monday 09 March 2026 01:12:56 +0000 (0:00:06.027) 0:07:59.181 ********** 2026-03-09 01:16:41.479802 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.479809 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.479815 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.479822 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.479828 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.479835 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.479842 | orchestrator | 2026-03-09 01:16:41.479848 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-09 01:16:41.479859 | orchestrator | Monday 09 March 2026 01:12:57 +0000 (0:00:00.804) 0:07:59.986 ********** 2026-03-09 01:16:41.479866 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:16:41.479873 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:16:41.479879 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:16:41.479886 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:16:41.479893 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:16:41.479899 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:16:41.479906 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:16:41.479913 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:16:41.479919 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:16:41.479926 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:16:41.479933 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.479939 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:16:41.479946 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.479957 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:16:41.479964 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:16:41.479971 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.479977 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 
01:16:41.479984 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:16:41.479990 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:16:41.480001 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:16:41.480007 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:16:41.480013 | orchestrator | 2026-03-09 01:16:41.480019 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-09 01:16:41.480026 | orchestrator | Monday 09 March 2026 01:13:04 +0000 (0:00:07.398) 0:08:07.384 ********** 2026-03-09 01:16:41.480032 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:16:41.480038 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:16:41.480045 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:16:41.480051 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:16:41.480057 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:16:41.480063 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:16:41.480069 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:16:41.480075 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:16:41.480081 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'})  2026-03-09 01:16:41.480088 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:16:41.480094 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:16:41.480100 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:16:41.480106 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:16:41.480112 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.480118 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:16:41.480125 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.480131 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:16:41.480137 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.480143 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:16:41.480152 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:16:41.480159 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:16:41.480165 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:16:41.480171 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:16:41.480178 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:16:41.480184 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:16:41.480190 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:16:41.480196 
| orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:16:41.480202 | orchestrator | 2026-03-09 01:16:41.480209 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-09 01:16:41.480215 | orchestrator | Monday 09 March 2026 01:13:12 +0000 (0:00:07.838) 0:08:15.223 ********** 2026-03-09 01:16:41.480221 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.480227 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.480237 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.480243 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.480250 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.480256 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.480262 | orchestrator | 2026-03-09 01:16:41.480268 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-09 01:16:41.480274 | orchestrator | Monday 09 March 2026 01:13:13 +0000 (0:00:00.852) 0:08:16.076 ********** 2026-03-09 01:16:41.480280 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.480287 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.480293 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.480299 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.480305 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.480311 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.480317 | orchestrator | 2026-03-09 01:16:41.480327 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-09 01:16:41.480334 | orchestrator | Monday 09 March 2026 01:13:14 +0000 (0:00:00.649) 0:08:16.725 ********** 2026-03-09 01:16:41.480340 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.480346 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.480352 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 01:16:41.480359 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.480365 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.480371 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.480377 | orchestrator | 2026-03-09 01:16:41.480383 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-09 01:16:41.480389 | orchestrator | Monday 09 March 2026 01:13:16 +0000 (0:00:02.332) 0:08:19.057 ********** 2026-03-09 01:16:41.480396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.480417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.480427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.480438 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.480445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.480455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.480462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.480469 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.480475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.480482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.480491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.480502 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.480509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.480520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.480527 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.480533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.480540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.480547 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.480553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.480567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.480573 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.480580 | orchestrator | 2026-03-09 01:16:41.480586 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] 
****************** 2026-03-09 01:16:41.480593 | orchestrator | Monday 09 March 2026 01:13:17 +0000 (0:00:01.571) 0:08:20.629 ********** 2026-03-09 01:16:41.480599 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-09 01:16:41.480605 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-09 01:16:41.480612 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.480618 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-09 01:16:41.480624 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-09 01:16:41.480630 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.480637 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-09 01:16:41.480643 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-09 01:16:41.480649 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.480655 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-09 01:16:41.480662 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-09 01:16:41.480668 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.480674 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-09 01:16:41.480680 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-09 01:16:41.480687 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.480693 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-09 01:16:41.480699 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-09 01:16:41.480705 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.480712 | orchestrator | 2026-03-09 01:16:41.480721 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-03-09 01:16:41.480728 | orchestrator | Monday 09 March 2026 01:13:18 +0000 
(0:00:00.972) 0:08:21.602 ********** 2026-03-09 01:16:41.480734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2026-03-09 01:16:41.480782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480807 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480835 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:41.480866 | orchestrator | 2026-03-09 01:16:41.480872 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-03-09 01:16:41.480878 | orchestrator | Monday 09 March 2026 01:13:21 +0000 (0:00:03.115) 0:08:24.717 ********** 2026-03-09 01:16:41.480885 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 01:16:41.480891 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.480897 | orchestrator | } 2026-03-09 01:16:41.480904 | orchestrator | changed: [testbed-node-4] => { 2026-03-09 01:16:41.480910 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.480916 | orchestrator | } 2026-03-09 01:16:41.480926 | orchestrator | changed: [testbed-node-5] => { 2026-03-09 01:16:41.480932 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.480938 | orchestrator | } 2026-03-09 01:16:41.480944 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:16:41.480951 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.480957 | orchestrator | } 2026-03-09 01:16:41.480963 | orchestrator | changed: 
[testbed-node-1] => { 2026-03-09 01:16:41.480969 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.480975 | orchestrator | } 2026-03-09 01:16:41.480981 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:16:41.480988 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:16:41.480994 | orchestrator | } 2026-03-09 01:16:41.481000 | orchestrator | 2026-03-09 01:16:41.481006 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:16:41.481013 | orchestrator | Monday 09 March 2026 01:13:23 +0000 (0:00:01.159) 0:08:25.877 ********** 2026-03-09 01:16:41.481019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.481030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.481041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.481048 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.481054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.481066 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:16:41.481073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.481085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:16:41.481096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.481103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.481109 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.481116 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.481122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.481132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.481139 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.481145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.481155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.481166 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.481173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:16:41.481179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:41.481186 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.481192 | orchestrator | 2026-03-09 01:16:41.481198 | orchestrator | TASK 
[nova-cell : include_tasks] *********************************************** 2026-03-09 01:16:41.481205 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:02.426) 0:08:28.303 ********** 2026-03-09 01:16:41.481211 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.481217 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.481223 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.481229 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.481235 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.481241 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.481247 | orchestrator | 2026-03-09 01:16:41.481254 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:16:41.481260 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:00.744) 0:08:29.048 ********** 2026-03-09 01:16:41.481266 | orchestrator | 2026-03-09 01:16:41.481272 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:16:41.481278 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:00.141) 0:08:29.190 ********** 2026-03-09 01:16:41.481284 | orchestrator | 2026-03-09 01:16:41.481291 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:16:41.481297 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:00.133) 0:08:29.324 ********** 2026-03-09 01:16:41.481303 | orchestrator | 2026-03-09 01:16:41.481309 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:16:41.481315 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:00.343) 0:08:29.667 ********** 2026-03-09 01:16:41.481321 | orchestrator | 2026-03-09 01:16:41.481328 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:16:41.481337 | orchestrator | Monday 09 March 2026 
01:13:27 +0000 (0:00:00.138) 0:08:29.806 ********** 2026-03-09 01:16:41.481343 | orchestrator | 2026-03-09 01:16:41.481350 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:16:41.481356 | orchestrator | Monday 09 March 2026 01:13:27 +0000 (0:00:00.144) 0:08:29.950 ********** 2026-03-09 01:16:41.481362 | orchestrator | 2026-03-09 01:16:41.481368 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-09 01:16:41.481374 | orchestrator | Monday 09 March 2026 01:13:27 +0000 (0:00:00.181) 0:08:30.131 ********** 2026-03-09 01:16:41.481389 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.481395 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.481416 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.481423 | orchestrator | 2026-03-09 01:16:41.481429 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-09 01:16:41.481435 | orchestrator | Monday 09 March 2026 01:13:39 +0000 (0:00:12.545) 0:08:42.677 ********** 2026-03-09 01:16:41.481441 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.481448 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.481454 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.481460 | orchestrator | 2026-03-09 01:16:41.481466 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-09 01:16:41.481472 | orchestrator | Monday 09 March 2026 01:13:56 +0000 (0:00:16.356) 0:08:59.034 ********** 2026-03-09 01:16:41.481479 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.481485 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.481491 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.481497 | orchestrator | 2026-03-09 01:16:41.481503 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 
2026-03-09 01:16:41.481509 | orchestrator | Monday 09 March 2026 01:14:19 +0000 (0:00:23.372) 0:09:22.406 ********** 2026-03-09 01:16:41.481516 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.481522 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.481528 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.481534 | orchestrator | 2026-03-09 01:16:41.481540 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-09 01:16:41.481546 | orchestrator | Monday 09 March 2026 01:14:50 +0000 (0:00:30.887) 0:09:53.293 ********** 2026-03-09 01:16:41.481556 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.481562 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.481569 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.481575 | orchestrator | 2026-03-09 01:16:41.481581 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-09 01:16:41.481587 | orchestrator | Monday 09 March 2026 01:14:51 +0000 (0:00:00.852) 0:09:54.146 ********** 2026-03-09 01:16:41.481594 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.481600 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.481606 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.481612 | orchestrator | 2026-03-09 01:16:41.481618 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-09 01:16:41.481624 | orchestrator | Monday 09 March 2026 01:14:52 +0000 (0:00:00.916) 0:09:55.062 ********** 2026-03-09 01:16:41.481631 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:41.481637 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:41.481643 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:41.481649 | orchestrator | 2026-03-09 01:16:41.481655 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 
2026-03-09 01:16:41.481661 | orchestrator | Monday 09 March 2026 01:15:19 +0000 (0:00:27.310) 0:10:22.373 ********** 2026-03-09 01:16:41.481667 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.481674 | orchestrator | 2026-03-09 01:16:41.481680 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-09 01:16:41.481686 | orchestrator | Monday 09 March 2026 01:15:19 +0000 (0:00:00.141) 0:10:22.514 ********** 2026-03-09 01:16:41.481692 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.481698 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.481705 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.481711 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.481717 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.481723 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-09 01:16:41.481730 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:16:41.481740 | orchestrator | 2026-03-09 01:16:41.481746 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-09 01:16:41.481752 | orchestrator | Monday 09 March 2026 01:15:44 +0000 (0:00:24.979) 0:10:47.493 ********** 2026-03-09 01:16:41.481759 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.481765 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.481771 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.481777 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.481783 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.481789 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.481795 | orchestrator | 2026-03-09 01:16:41.481802 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-09 01:16:41.481808 | orchestrator | Monday 09 March 2026 01:15:57 +0000 (0:00:12.832) 0:11:00.325 ********** 2026-03-09 01:16:41.481814 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.481820 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.481826 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.481832 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.481838 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.481845 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-03-09 01:16:41.481851 | orchestrator | 2026-03-09 01:16:41.481857 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-09 01:16:41.481863 | orchestrator | Monday 09 March 2026 01:16:02 +0000 (0:00:04.589) 0:11:04.915 ********** 2026-03-09 01:16:41.481869 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:16:41.481875 | 
orchestrator | 2026-03-09 01:16:41.481882 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-09 01:16:41.481902 | orchestrator | Monday 09 March 2026 01:16:17 +0000 (0:00:15.138) 0:11:20.053 ********** 2026-03-09 01:16:41.481908 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:16:41.481914 | orchestrator | 2026-03-09 01:16:41.481920 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-09 01:16:41.481927 | orchestrator | Monday 09 March 2026 01:16:19 +0000 (0:00:01.862) 0:11:21.916 ********** 2026-03-09 01:16:41.481933 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.481939 | orchestrator | 2026-03-09 01:16:41.481945 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-09 01:16:41.481951 | orchestrator | Monday 09 March 2026 01:16:20 +0000 (0:00:01.412) 0:11:23.329 ********** 2026-03-09 01:16:41.481957 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:16:41.481964 | orchestrator | 2026-03-09 01:16:41.481970 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-09 01:16:41.481976 | orchestrator | 2026-03-09 01:16:41.481982 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-09 01:16:41.481988 | orchestrator | Monday 09 March 2026 01:16:34 +0000 (0:00:13.524) 0:11:36.854 ********** 2026-03-09 01:16:41.481994 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:41.482001 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:41.482007 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:41.482013 | orchestrator | 2026-03-09 01:16:41.482047 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-09 01:16:41.482053 | orchestrator | 2026-03-09 
01:16:41.482059 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-09 01:16:41.482066 | orchestrator | Monday 09 March 2026 01:16:35 +0000 (0:00:01.239) 0:11:38.093 ********** 2026-03-09 01:16:41.482072 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.482078 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.482084 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.482090 | orchestrator | 2026-03-09 01:16:41.482096 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-09 01:16:41.482103 | orchestrator | 2026-03-09 01:16:41.482109 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-09 01:16:41.482121 | orchestrator | Monday 09 March 2026 01:16:35 +0000 (0:00:00.563) 0:11:38.656 ********** 2026-03-09 01:16:41.482131 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-09 01:16:41.482138 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-09 01:16:41.482144 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-09 01:16:41.482150 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-09 01:16:41.482156 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-09 01:16:41.482163 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-09 01:16:41.482169 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:41.482175 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-09 01:16:41.482182 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-09 01:16:41.482188 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-09 01:16:41.482194 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-09 01:16:41.482200 | orchestrator | 
skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-09 01:16:41.482207 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-09 01:16:41.482213 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:41.482219 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-09 01:16:41.482225 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-09 01:16:41.482232 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-09 01:16:41.482238 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-09 01:16:41.482244 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-09 01:16:41.482251 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-09 01:16:41.482257 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:41.482263 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-09 01:16:41.482269 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-09 01:16:41.482276 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-09 01:16:41.482282 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-09 01:16:41.482288 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-09 01:16:41.482294 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-09 01:16:41.482300 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.482307 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-09 01:16:41.482313 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-09 01:16:41.482319 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-09 01:16:41.482325 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-09 01:16:41.482332 | orchestrator | 
skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-09 01:16:41.482338 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-09 01:16:41.482344 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.482350 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-09 01:16:41.482356 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-09 01:16:41.482363 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-09 01:16:41.482369 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-09 01:16:41.482375 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-09 01:16:41.482381 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-09 01:16:41.482387 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.482394 | orchestrator | 2026-03-09 01:16:41.482421 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-09 01:16:41.482431 | orchestrator | 2026-03-09 01:16:41.482438 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-09 01:16:41.482444 | orchestrator | Monday 09 March 2026 01:16:37 +0000 (0:00:01.525) 0:11:40.182 ********** 2026-03-09 01:16:41.482450 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-09 01:16:41.482456 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-09 01:16:41.482463 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.482469 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-09 01:16:41.482475 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-09 01:16:41.482481 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.482487 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-09 01:16:41.482493 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-09 01:16:41.482500 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.482506 | orchestrator | 2026-03-09 01:16:41.482512 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-09 01:16:41.482518 | orchestrator | 2026-03-09 01:16:41.482525 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-09 01:16:41.482531 | orchestrator | Monday 09 March 2026 01:16:38 +0000 (0:00:00.818) 0:11:41.001 ********** 2026-03-09 01:16:41.482537 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.482543 | orchestrator | 2026-03-09 01:16:41.482550 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-09 01:16:41.482556 | orchestrator | 2026-03-09 01:16:41.482562 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-09 01:16:41.482568 | orchestrator | Monday 09 March 2026 01:16:39 +0000 (0:00:00.870) 0:11:41.872 ********** 2026-03-09 01:16:41.482574 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:41.482581 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:41.482587 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:41.482593 | orchestrator | 2026-03-09 01:16:41.482599 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:16:41.482609 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:16:41.482617 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=49  rescued=0 ignored=0 2026-03-09 01:16:41.482623 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=56  rescued=0 ignored=0 2026-03-09 01:16:41.482629 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 
failed=0 skipped=56  rescued=0 ignored=0 2026-03-09 01:16:41.482636 | orchestrator | testbed-node-3 : ok=46  changed=29  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2026-03-09 01:16:41.482642 | orchestrator | testbed-node-4 : ok=39  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-09 01:16:41.482648 | orchestrator | testbed-node-5 : ok=44  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-09 01:16:41.482654 | orchestrator | 2026-03-09 01:16:41.482661 | orchestrator | 2026-03-09 01:16:41.482667 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:16:41.482673 | orchestrator | Monday 09 March 2026 01:16:39 +0000 (0:00:00.466) 0:11:42.338 ********** 2026-03-09 01:16:41.482680 | orchestrator | =============================================================================== 2026-03-09 01:16:41.482686 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.97s 2026-03-09 01:16:41.482692 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.89s 2026-03-09 01:16:41.482702 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.31s 2026-03-09 01:16:41.482708 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 27.16s 2026-03-09 01:16:41.482715 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.98s 2026-03-09 01:16:41.482721 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.58s 2026-03-09 01:16:41.482727 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.37s 2026-03-09 01:16:41.482733 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.59s 2026-03-09 01:16:41.482740 | orchestrator | nova : Running Nova API bootstrap container 
---------------------------- 21.34s 2026-03-09 01:16:41.482746 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 19.04s 2026-03-09 01:16:41.482752 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.36s 2026-03-09 01:16:41.482758 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 15.40s 2026-03-09 01:16:41.482765 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.14s 2026-03-09 01:16:41.482771 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.40s 2026-03-09 01:16:41.482777 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.17s 2026-03-09 01:16:41.482783 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 14.00s 2026-03-09 01:16:41.482793 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.52s 2026-03-09 01:16:41.482799 | orchestrator | nova-cell : Get container facts ---------------------------------------- 12.88s 2026-03-09 01:16:41.482805 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.83s 2026-03-09 01:16:41.482812 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.55s 2026-03-09 01:16:41.482818 | orchestrator | 2026-03-09 01:16:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:44.501707 | orchestrator | 2026-03-09 01:16:44 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state STARTED 2026-03-09 01:16:44.501807 | orchestrator | 2026-03-09 01:16:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:20:32.936817 | orchestrator | 2026-03-09 01:20:32 | INFO  | Task f3e6510a-2845-4d22-a88f-421b6f8dfc16 is in state SUCCESS 2026-03-09 01:20:32.938769 | orchestrator | 2026-03-09 01:20:32.938807 | orchestrator | 2026-03-09 01:20:32.938816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:20:32.938824 | orchestrator | 2026-03-09 01:20:32.938831 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:20:32.938839 | orchestrator | Monday 09 March 2026 01:15:11 +0000 (0:00:00.296) 0:00:00.296 ********** 2026-03-09 01:20:32.938850 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.938862 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:32.938872 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:32.938883 | orchestrator | 2026-03-09 01:20:32.938893 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:20:32.938903 | orchestrator | Monday 09 March 2026 01:15:11 +0000 (0:00:00.357) 0:00:00.654 ********** 2026-03-09 01:20:32.938912 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-09 01:20:32.938924 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-09 01:20:32.938934 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-09 01:20:32.938943 | orchestrator | 2026-03-09 01:20:32.938953 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-09 01:20:32.938962 | orchestrator | 2026-03-09 01:20:32.938971 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:32.938980 | orchestrator | Monday 09 March 2026 01:15:12 +0000
(0:00:00.509) 0:00:01.163 ********** 2026-03-09 01:20:32.938991 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:32.939002 | orchestrator | 2026-03-09 01:20:32.939013 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] ************** 2026-03-09 01:20:32.939048 | orchestrator | Monday 09 March 2026 01:15:12 +0000 (0:00:00.638) 0:00:01.801 ********** 2026-03-09 01:20:32.939059 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-09 01:20:32.939068 | orchestrator | 2026-03-09 01:20:32.939078 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] ************* 2026-03-09 01:20:32.939089 | orchestrator | Monday 09 March 2026 01:15:16 +0000 (0:00:03.764) 0:00:05.566 ********** 2026-03-09 01:20:32.939098 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-09 01:20:32.939109 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-09 01:20:32.939118 | orchestrator | 2026-03-09 01:20:32.939128 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-09 01:20:32.939138 | orchestrator | Monday 09 March 2026 01:15:23 +0000 (0:00:07.045) 0:00:12.611 ********** 2026-03-09 01:20:32.939148 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:20:32.939157 | orchestrator | 2026-03-09 01:20:32.939166 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-09 01:20:32.939176 | orchestrator | Monday 09 March 2026 01:15:27 +0000 (0:00:03.971) 0:00:16.583 ********** 2026-03-09 01:20:32.939186 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-09 01:20:32.939197 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-09 
01:20:32.939206 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:20:32.939215 | orchestrator | 2026-03-09 01:20:32.939226 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-09 01:20:32.939234 | orchestrator | Monday 09 March 2026 01:15:37 +0000 (0:00:09.563) 0:00:26.146 ********** 2026-03-09 01:20:32.939244 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:20:32.939253 | orchestrator | 2026-03-09 01:20:32.939262 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************ 2026-03-09 01:20:32.939272 | orchestrator | Monday 09 March 2026 01:15:41 +0000 (0:00:03.943) 0:00:30.090 ********** 2026-03-09 01:20:32.939282 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-09 01:20:32.939458 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-09 01:20:32.939471 | orchestrator | 2026-03-09 01:20:32.939483 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-09 01:20:32.939494 | orchestrator | Monday 09 March 2026 01:15:50 +0000 (0:00:08.895) 0:00:38.986 ********** 2026-03-09 01:20:32.939505 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-09 01:20:32.939516 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-09 01:20:32.939527 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-09 01:20:32.939787 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-09 01:20:32.939795 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-09 01:20:32.939802 | orchestrator | 2026-03-09 01:20:32.939808 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:32.939814 | orchestrator | Monday 09 March 2026 
01:16:08 +0000 (0:00:18.267) 0:00:57.253 ********** 2026-03-09 01:20:32.939821 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:32.939827 | orchestrator | 2026-03-09 01:20:32.939833 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-09 01:20:32.939839 | orchestrator | Monday 09 March 2026 01:16:08 +0000 (0:00:00.588) 0:00:57.842 ********** 2026-03-09 01:20:32.939845 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.939852 | orchestrator | 2026-03-09 01:20:32.939869 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-09 01:20:32.939876 | orchestrator | Monday 09 March 2026 01:16:15 +0000 (0:00:06.197) 0:01:04.039 ********** 2026-03-09 01:20:32.939892 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.939899 | orchestrator | 2026-03-09 01:20:32.939905 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-09 01:20:32.939922 | orchestrator | Monday 09 March 2026 01:16:19 +0000 (0:00:04.343) 0:01:08.383 ********** 2026-03-09 01:20:32.939928 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.939935 | orchestrator | 2026-03-09 01:20:32.939941 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-09 01:20:32.939947 | orchestrator | Monday 09 March 2026 01:16:23 +0000 (0:00:03.657) 0:01:12.040 ********** 2026-03-09 01:20:32.939953 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-09 01:20:32.939960 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-09 01:20:32.939966 | orchestrator | 2026-03-09 01:20:32.939972 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-09 01:20:32.939978 | orchestrator | Monday 09 March 2026 01:16:34 +0000 
(0:00:11.756) 0:01:23.797 ********** 2026-03-09 01:20:32.939984 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-09 01:20:32.939991 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-09 01:20:32.939999 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-09 01:20:32.940006 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-09 01:20:32.940013 | orchestrator | 2026-03-09 01:20:32.940019 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-09 01:20:32.940025 | orchestrator | Monday 09 March 2026 01:16:54 +0000 (0:00:19.297) 0:01:43.094 ********** 2026-03-09 01:20:32.940031 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940037 | orchestrator | 2026-03-09 01:20:32.940087 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-09 01:20:32.940095 | orchestrator | Monday 09 March 2026 01:16:59 +0000 (0:00:05.500) 0:01:48.595 ********** 2026-03-09 01:20:32.940102 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940109 | orchestrator | 2026-03-09 01:20:32.940115 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-09 01:20:32.940122 | orchestrator | Monday 09 March 2026 01:17:05 +0000 (0:00:05.929) 0:01:54.524 ********** 2026-03-09 01:20:32.940128 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:32.940135 | orchestrator | 2026-03-09 01:20:32.940414 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-09 
01:20:32.940423 | orchestrator | Monday 09 March 2026 01:17:05 +0000 (0:00:00.227) 0:01:54.751 ********** 2026-03-09 01:20:32.940430 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.940436 | orchestrator | 2026-03-09 01:20:32.940442 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:32.940448 | orchestrator | Monday 09 March 2026 01:17:10 +0000 (0:00:04.291) 0:01:59.043 ********** 2026-03-09 01:20:32.940455 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:32.940461 | orchestrator | 2026-03-09 01:20:32.940467 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-09 01:20:32.940474 | orchestrator | Monday 09 March 2026 01:17:11 +0000 (0:00:01.106) 0:02:00.150 ********** 2026-03-09 01:20:32.940480 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.940486 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940492 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940499 | orchestrator | 2026-03-09 01:20:32.940505 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-09 01:20:32.940511 | orchestrator | Monday 09 March 2026 01:17:17 +0000 (0:00:06.513) 0:02:06.663 ********** 2026-03-09 01:20:32.940525 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940531 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.940537 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940543 | orchestrator | 2026-03-09 01:20:32.940549 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-09 01:20:32.940556 | orchestrator | Monday 09 March 2026 01:17:21 +0000 (0:00:03.696) 0:02:10.360 ********** 2026-03-09 01:20:32.940562 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940568 | orchestrator | 
changed: [testbed-node-1] 2026-03-09 01:20:32.940574 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940580 | orchestrator | 2026-03-09 01:20:32.940586 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-09 01:20:32.940593 | orchestrator | Monday 09 March 2026 01:17:22 +0000 (0:00:00.783) 0:02:11.143 ********** 2026-03-09 01:20:32.940599 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:32.940605 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.940611 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:32.940617 | orchestrator | 2026-03-09 01:20:32.940623 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-09 01:20:32.940629 | orchestrator | Monday 09 March 2026 01:17:24 +0000 (0:00:02.224) 0:02:13.367 ********** 2026-03-09 01:20:32.940636 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940642 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.940648 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940654 | orchestrator | 2026-03-09 01:20:32.940660 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-09 01:20:32.940666 | orchestrator | Monday 09 March 2026 01:17:25 +0000 (0:00:01.466) 0:02:14.833 ********** 2026-03-09 01:20:32.940673 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940679 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.940690 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940697 | orchestrator | 2026-03-09 01:20:32.940703 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-09 01:20:32.940709 | orchestrator | Monday 09 March 2026 01:17:27 +0000 (0:00:01.359) 0:02:16.193 ********** 2026-03-09 01:20:32.940716 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940722 | orchestrator | changed: [testbed-node-1] 
2026-03-09 01:20:32.940728 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940734 | orchestrator | 2026-03-09 01:20:32.940764 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-09 01:20:32.940772 | orchestrator | Monday 09 March 2026 01:17:29 +0000 (0:00:02.185) 0:02:18.379 ********** 2026-03-09 01:20:32.940778 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.940784 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.940790 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.940796 | orchestrator | 2026-03-09 01:20:32.940803 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-09 01:20:32.940809 | orchestrator | Monday 09 March 2026 01:17:32 +0000 (0:00:02.984) 0:02:21.363 ********** 2026-03-09 01:20:32.940815 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.940821 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:32.940827 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:32.940834 | orchestrator | 2026-03-09 01:20:32.940841 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-09 01:20:32.940852 | orchestrator | Monday 09 March 2026 01:17:33 +0000 (0:00:00.717) 0:02:22.081 ********** 2026-03-09 01:20:32.940864 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.940891 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:32.940902 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:32.940913 | orchestrator | 2026-03-09 01:20:32.940923 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:32.940932 | orchestrator | Monday 09 March 2026 01:17:36 +0000 (0:00:02.973) 0:02:25.055 ********** 2026-03-09 01:20:32.940942 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:32.940960 | 
orchestrator | 2026-03-09 01:20:32.940970 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-09 01:20:32.940980 | orchestrator | Monday 09 March 2026 01:17:36 +0000 (0:00:00.808) 0:02:25.863 ********** 2026-03-09 01:20:32.940990 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.941001 | orchestrator | 2026-03-09 01:20:32.941014 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-09 01:20:32.941026 | orchestrator | Monday 09 March 2026 01:17:40 +0000 (0:00:03.822) 0:02:29.685 ********** 2026-03-09 01:20:32.941039 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.941051 | orchestrator | 2026-03-09 01:20:32.941062 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-09 01:20:32.941074 | orchestrator | Monday 09 March 2026 01:17:44 +0000 (0:00:03.678) 0:02:33.364 ********** 2026-03-09 01:20:32.941084 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-09 01:20:32.941095 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-09 01:20:32.941107 | orchestrator | 2026-03-09 01:20:32.941118 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-09 01:20:32.941130 | orchestrator | Monday 09 March 2026 01:17:51 +0000 (0:00:07.061) 0:02:40.425 ********** 2026-03-09 01:20:32.941141 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.941153 | orchestrator | 2026-03-09 01:20:32.941163 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-09 01:20:32.941171 | orchestrator | Monday 09 March 2026 01:17:55 +0000 (0:00:03.735) 0:02:44.160 ********** 2026-03-09 01:20:32.941178 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:32.941186 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:32.941193 | orchestrator | ok: [testbed-node-2] 2026-03-09 
01:20:32.941200 | orchestrator | 2026-03-09 01:20:32.941207 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-09 01:20:32.941215 | orchestrator | Monday 09 March 2026 01:17:55 +0000 (0:00:00.384) 0:02:44.545 ********** 2026-03-09 01:20:32.941225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.941278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.941294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.941301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.941308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.941314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.941321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.941469 | orchestrator | 2026-03-09 01:20:32.941476 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-09 01:20:32.941482 | orchestrator | Monday 09 March 2026 01:17:58 +0000 (0:00:02.621) 0:02:47.166 ********** 2026-03-09 01:20:32.941488 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:32.941495 | orchestrator | 2026-03-09 01:20:32.941501 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-09 01:20:32.941507 | orchestrator | Monday 09 March 2026 01:17:58 +0000 (0:00:00.168) 0:02:47.335 ********** 2026-03-09 01:20:32.941513 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:32.941519 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:32.941525 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:32.941531 | orchestrator | 2026-03-09 01:20:32.941538 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-09 01:20:32.941544 | orchestrator | Monday 09 March 2026 01:17:58 +0000 (0:00:00.571) 0:02:47.907 ********** 2026-03-09 01:20:32.941551 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.941593 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:20:32.941617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.941655 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:20:32.941682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.941716 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:20:32.941722 | orchestrator |
2026-03-09 01:20:32.941729 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-09 01:20:32.941735 | orchestrator | Monday 09 March 2026 01:17:59 +0000 (0:00:00.765) 0:02:48.672 **********
2026-03-09 01:20:32.941741 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:20:32.941752 | orchestrator |
2026-03-09 01:20:32.941758 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-09 01:20:32.941764 | orchestrator | Monday 09 March 2026 01:18:00 +0000 (0:00:00.679) 0:02:49.352 **********
2026-03-09 01:20:32.941774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.941929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.941937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.941943 | orchestrator |
2026-03-09 01:20:32.941949 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-09 01:20:32.941956 | orchestrator | Monday 09 March 2026 01:18:06 +0000 (0:00:06.011) 0:02:55.363 **********
2026-03-09 01:20:32.941962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.941969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.941979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.941986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.942061 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:20:32.942068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.942074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.942081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.942106 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:20:32.942123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.942130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.942137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.942160 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:20:32.942167 | orchestrator |
2026-03-09 01:20:32.942173 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-03-09 01:20:32.942179 | orchestrator | Monday 09 March 2026 01:18:07 +0000 (0:00:00.912) 0:02:56.275 **********
2026-03-09 01:20:32.942189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.942204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.942211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:32.942235 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:20:32.942241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:32.942251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:32.942262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:32.942276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker',
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:32.942286 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:32.942292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:32.942299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:32.942306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.942320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.942327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:32.942334 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:32.942340 | orchestrator | 2026-03-09 01:20:32.942346 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-09 01:20:32.942353 | orchestrator | Monday 09 March 2026 01:18:08 +0000 (0:00:00.878) 0:02:57.154 ********** 2026-03-09 01:20:32.942363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.942370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.942380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.942439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.942447 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.942454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.942467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942539 | orchestrator | 2026-03-09 01:20:32.942546 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-09 01:20:32.942552 | orchestrator | Monday 09 March 2026 01:18:13 +0000 (0:00:05.418) 0:03:02.572 ********** 2026-03-09 01:20:32.942559 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-09 01:20:32.942565 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-09 01:20:32.942571 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-09 01:20:32.942578 | orchestrator | 2026-03-09 01:20:32.942584 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-09 01:20:32.942590 | orchestrator | Monday 09 March 2026 01:18:15 +0000 (0:00:01.886) 0:03:04.458 ********** 2026-03-09 01:20:32.942604 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.942615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.942622 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.942629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.942636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.942642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.942655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942675 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942695 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.942734 | orchestrator | 2026-03-09 01:20:32.942740 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-09 01:20:32.942746 | orchestrator | Monday 09 March 2026 01:18:33 +0000 (0:00:18.090) 0:03:22.549 ********** 2026-03-09 01:20:32.942752 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.942759 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.942765 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.942771 | orchestrator | 2026-03-09 01:20:32.942777 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-09 01:20:32.942783 | orchestrator | Monday 09 March 2026 01:18:35 +0000 (0:00:01.552) 0:03:24.101 ********** 2026-03-09 01:20:32.942789 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942796 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942802 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942808 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.942814 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.942820 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.942827 | orchestrator | changed: [testbed-node-0] => 
(item=server_ca.cert.pem) 2026-03-09 01:20:32.942833 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.942839 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.942845 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-09 01:20:32.942851 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-09 01:20:32.942857 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-09 01:20:32.942863 | orchestrator | 2026-03-09 01:20:32.942869 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-09 01:20:32.942875 | orchestrator | Monday 09 March 2026 01:18:40 +0000 (0:00:05.800) 0:03:29.902 ********** 2026-03-09 01:20:32.942882 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942888 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942894 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942900 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.942906 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.942912 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.942918 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.942929 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.942935 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.942941 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-09 01:20:32.942947 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-09 01:20:32.942953 | orchestrator | changed: [testbed-node-2] => 
(item=server_ca.key.pem) 2026-03-09 01:20:32.942959 | orchestrator | 2026-03-09 01:20:32.942965 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-09 01:20:32.942972 | orchestrator | Monday 09 March 2026 01:18:47 +0000 (0:00:06.380) 0:03:36.283 ********** 2026-03-09 01:20:32.942978 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942984 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942990 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-09 01:20:32.942999 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.943005 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.943012 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-09 01:20:32.943018 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.943024 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.943034 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-09 01:20:32.943040 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-09 01:20:32.943046 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-09 01:20:32.943052 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-09 01:20:32.943058 | orchestrator | 2026-03-09 01:20:32.943064 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-03-09 01:20:32.943071 | orchestrator | Monday 09 March 2026 01:18:52 +0000 (0:00:05.461) 0:03:41.745 ********** 2026-03-09 01:20:32.943077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.943084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.943090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:32.943107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.943117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.943124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:32.943130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:32.943205 | orchestrator | 2026-03-09 01:20:32.943212 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-03-09 01:20:32.943218 | orchestrator | Monday 09 March 2026 01:18:57 +0000 (0:00:04.491) 0:03:46.236 ********** 2026-03-09 01:20:32.943224 | orchestrator | changed: [testbed-node-0] => { 2026-03-09 01:20:32.943230 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:20:32.943237 | orchestrator | } 2026-03-09 01:20:32.943243 | orchestrator | changed: [testbed-node-1] => { 2026-03-09 01:20:32.943249 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:20:32.943255 | orchestrator | } 2026-03-09 01:20:32.943262 | orchestrator | changed: [testbed-node-2] => { 2026-03-09 01:20:32.943268 | orchestrator |  "msg": "Notifying handlers" 2026-03-09 01:20:32.943274 | orchestrator | } 2026-03-09 01:20:32.943280 | orchestrator | 2026-03-09 01:20:32.943286 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-09 01:20:32.943292 | orchestrator | Monday 09 March 2026 01:18:57 +0000 (0:00:00.401) 0:03:46.638 ********** 2026-03-09 01:20:32.943302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:32.943314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:32.943321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.943328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.943338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:32.943345 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:32.943351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:32.943358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:32.943370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.943377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.943384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:32.943433 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:32.943440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:32.943446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:32.943453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.943466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:32.943473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:32.943480 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:32.943486 | orchestrator | 2026-03-09 01:20:32.943492 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:32.943498 | orchestrator | Monday 09 March 2026 01:18:59 +0000 (0:00:01.462) 0:03:48.100 ********** 2026-03-09 01:20:32.943505 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:32.943517 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:32.943523 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:32.943529 | orchestrator | 2026-03-09 01:20:32.943535 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-09 01:20:32.943542 | orchestrator | Monday 09 March 2026 01:18:59 +0000 (0:00:00.345) 0:03:48.446 ********** 2026-03-09 01:20:32.943548 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943554 | orchestrator | 2026-03-09 01:20:32.943560 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-09 01:20:32.943566 | orchestrator | Monday 09 March 2026 01:19:01 +0000 (0:00:02.395) 0:03:50.841 ********** 2026-03-09 01:20:32.943572 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943579 | orchestrator | 2026-03-09 01:20:32.943585 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-09 01:20:32.943591 | orchestrator | Monday 09 March 2026 01:19:04 +0000 (0:00:02.441) 0:03:53.283 ********** 2026-03-09 01:20:32.943597 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943603 | orchestrator | 2026-03-09 01:20:32.943609 | orchestrator | TASK 
[octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-09 01:20:32.943616 | orchestrator | Monday 09 March 2026 01:19:06 +0000 (0:00:02.499) 0:03:55.782 ********** 2026-03-09 01:20:32.943622 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943628 | orchestrator | 2026-03-09 01:20:32.943634 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-09 01:20:32.943640 | orchestrator | Monday 09 March 2026 01:19:09 +0000 (0:00:02.469) 0:03:58.252 ********** 2026-03-09 01:20:32.943646 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943653 | orchestrator | 2026-03-09 01:20:32.943659 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-09 01:20:32.943665 | orchestrator | Monday 09 March 2026 01:19:34 +0000 (0:00:25.031) 0:04:23.283 ********** 2026-03-09 01:20:32.943671 | orchestrator | 2026-03-09 01:20:32.943677 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-09 01:20:32.943684 | orchestrator | Monday 09 March 2026 01:19:34 +0000 (0:00:00.098) 0:04:23.382 ********** 2026-03-09 01:20:32.943690 | orchestrator | 2026-03-09 01:20:32.943696 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-09 01:20:32.943702 | orchestrator | Monday 09 March 2026 01:19:34 +0000 (0:00:00.077) 0:04:23.460 ********** 2026-03-09 01:20:32.943708 | orchestrator | 2026-03-09 01:20:32.943715 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-09 01:20:32.943721 | orchestrator | Monday 09 March 2026 01:19:35 +0000 (0:00:00.475) 0:04:23.936 ********** 2026-03-09 01:20:32.943727 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943733 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.943739 | orchestrator | changed: [testbed-node-1] 2026-03-09 
01:20:32.943745 | orchestrator | 2026-03-09 01:20:32.943751 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-09 01:20:32.943758 | orchestrator | Monday 09 March 2026 01:19:51 +0000 (0:00:16.042) 0:04:39.978 ********** 2026-03-09 01:20:32.943764 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943770 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.943776 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.943782 | orchestrator | 2026-03-09 01:20:32.943789 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-09 01:20:32.943795 | orchestrator | Monday 09 March 2026 01:20:03 +0000 (0:00:12.589) 0:04:52.567 ********** 2026-03-09 01:20:32.943801 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.943807 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943813 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.943819 | orchestrator | 2026-03-09 01:20:32.943825 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-09 01:20:32.943832 | orchestrator | Monday 09 March 2026 01:20:14 +0000 (0:00:11.100) 0:05:03.668 ********** 2026-03-09 01:20:32.943838 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943848 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.943854 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.943860 | orchestrator | 2026-03-09 01:20:32.943866 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-09 01:20:32.943872 | orchestrator | Monday 09 March 2026 01:20:21 +0000 (0:00:06.330) 0:05:09.998 ********** 2026-03-09 01:20:32.943878 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:32.943888 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:32.943894 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:32.943900 
| orchestrator | 2026-03-09 01:20:32.943906 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:20:32.943913 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 01:20:32.943923 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:20:32.943929 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:20:32.943936 | orchestrator | 2026-03-09 01:20:32.943942 | orchestrator | 2026-03-09 01:20:32.943948 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:20:32.943954 | orchestrator | Monday 09 March 2026 01:20:32 +0000 (0:00:11.200) 0:05:21.199 ********** 2026-03-09 01:20:32.943961 | orchestrator | =============================================================================== 2026-03-09 01:20:32.943967 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 25.03s 2026-03-09 01:20:32.943973 | orchestrator | octavia : Add rules for security groups -------------------------------- 19.30s 2026-03-09 01:20:32.943979 | orchestrator | octavia : Adding octavia related roles --------------------------------- 18.27s 2026-03-09 01:20:32.943985 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.09s 2026-03-09 01:20:32.943991 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.04s 2026-03-09 01:20:32.943997 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.59s 2026-03-09 01:20:32.944004 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.76s 2026-03-09 01:20:32.944010 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.20s 
2026-03-09 01:20:32.944016 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 11.10s 2026-03-09 01:20:32.944022 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.56s 2026-03-09 01:20:32.944028 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 8.90s 2026-03-09 01:20:32.944034 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.06s 2026-03-09 01:20:32.944040 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 7.05s 2026-03-09 01:20:32.944046 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.51s 2026-03-09 01:20:32.944053 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.38s 2026-03-09 01:20:32.944059 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 6.33s 2026-03-09 01:20:32.944065 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.20s 2026-03-09 01:20:32.944071 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.01s 2026-03-09 01:20:32.944077 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.93s 2026-03-09 01:20:32.944084 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.80s 2026-03-09 01:20:32.944090 | orchestrator | 2026-03-09 01:20:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-09 01:20:35.985504 | orchestrator | 2026-03-09 01:20:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-09 01:20:39.024991 | orchestrator | 2026-03-09 01:20:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-09 01:20:42.064559 | orchestrator | 2026-03-09 01:20:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-09 
01:21:34.209497 | orchestrator | 2026-03-09 01:21:34.212792 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 9 01:21:34 UTC 2026 2026-03-09 01:21:34.212859 | orchestrator | 2026-03-09 01:21:34.633605 | orchestrator | ok: Runtime: 0:39:48.384619 2026-03-09 01:21:34.900262 | 2026-03-09 01:21:34.900439 | TASK [Bootstrap services] 2026-03-09 01:21:35.618531 | orchestrator | 2026-03-09 01:21:35.618652 | orchestrator | # BOOTSTRAP 2026-03-09 01:21:35.618662 | orchestrator | 2026-03-09 01:21:35.618667 | orchestrator | + set -e 2026-03-09 01:21:35.618672 | orchestrator | + echo 2026-03-09 01:21:35.618678 | orchestrator | + echo '# BOOTSTRAP' 2026-03-09 01:21:35.618685 | orchestrator | + echo 2026-03-09 01:21:35.618705 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-09 01:21:35.626761 | orchestrator | + set -e 2026-03-09 01:21:35.626848 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-09 01:21:40.853807 | orchestrator | 2026-03-09 01:21:40 | INFO  | It takes a moment until task 8f4ac688-a718-4321-9f0b-0700eb586e2d (flavor-manager) has been started and output is visible here. 
2026-03-09 01:21:49.364747 | orchestrator | 2026-03-09 01:21:43 | INFO  | Flavor SCS-1L-1 created
2026-03-09 01:21:49.364884 | orchestrator | 2026-03-09 01:21:44 | INFO  | Flavor SCS-1L-1-5 created
2026-03-09 01:21:49.364913 | orchestrator | 2026-03-09 01:21:44 | INFO  | Flavor SCS-1V-2 created
2026-03-09 01:21:49.364934 | orchestrator | 2026-03-09 01:21:44 | INFO  | Flavor SCS-1V-2-5 created
2026-03-09 01:21:49.364955 | orchestrator | 2026-03-09 01:21:44 | INFO  | Flavor SCS-1V-4 created
2026-03-09 01:21:49.364975 | orchestrator | 2026-03-09 01:21:45 | INFO  | Flavor SCS-1V-4-10 created
2026-03-09 01:21:49.364986 | orchestrator | 2026-03-09 01:21:45 | INFO  | Flavor SCS-1V-8 created
2026-03-09 01:21:49.364999 | orchestrator | 2026-03-09 01:21:45 | INFO  | Flavor SCS-1V-8-20 created
2026-03-09 01:21:49.365024 | orchestrator | 2026-03-09 01:21:45 | INFO  | Flavor SCS-2V-4 created
2026-03-09 01:21:49.365036 | orchestrator | 2026-03-09 01:21:45 | INFO  | Flavor SCS-2V-4-10 created
2026-03-09 01:21:49.365047 | orchestrator | 2026-03-09 01:21:45 | INFO  | Flavor SCS-2V-8 created
2026-03-09 01:21:49.365058 | orchestrator | 2026-03-09 01:21:46 | INFO  | Flavor SCS-2V-8-20 created
2026-03-09 01:21:49.365069 | orchestrator | 2026-03-09 01:21:46 | INFO  | Flavor SCS-2V-16 created
2026-03-09 01:21:49.365079 | orchestrator | 2026-03-09 01:21:46 | INFO  | Flavor SCS-2V-16-50 created
2026-03-09 01:21:49.365090 | orchestrator | 2026-03-09 01:21:46 | INFO  | Flavor SCS-4V-8 created
2026-03-09 01:21:49.365101 | orchestrator | 2026-03-09 01:21:46 | INFO  | Flavor SCS-4V-8-20 created
2026-03-09 01:21:49.365112 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-4V-16 created
2026-03-09 01:21:49.365122 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-4V-16-50 created
2026-03-09 01:21:49.365133 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-4V-32 created
2026-03-09 01:21:49.365144 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-4V-32-100 created
2026-03-09 01:21:49.365155 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-8V-16 created
2026-03-09 01:21:49.365166 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-8V-16-50 created
2026-03-09 01:21:49.365178 | orchestrator | 2026-03-09 01:21:47 | INFO  | Flavor SCS-8V-32 created
2026-03-09 01:21:49.365188 | orchestrator | 2026-03-09 01:21:48 | INFO  | Flavor SCS-8V-32-100 created
2026-03-09 01:21:49.365199 | orchestrator | 2026-03-09 01:21:48 | INFO  | Flavor SCS-16V-32 created
2026-03-09 01:21:49.365210 | orchestrator | 2026-03-09 01:21:48 | INFO  | Flavor SCS-16V-32-100 created
2026-03-09 01:21:49.365221 | orchestrator | 2026-03-09 01:21:48 | INFO  | Flavor SCS-2V-4-20s created
2026-03-09 01:21:49.365231 | orchestrator | 2026-03-09 01:21:48 | INFO  | Flavor SCS-4V-8-50s created
2026-03-09 01:21:49.365242 | orchestrator | 2026-03-09 01:21:48 | INFO  | Flavor SCS-4V-16-100s created
2026-03-09 01:21:49.365253 | orchestrator | 2026-03-09 01:21:49 | INFO  | Flavor SCS-8V-32-100s created
2026-03-09 01:21:51.799469 | orchestrator | 2026-03-09 01:21:51 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-09 01:21:51.809305 | orchestrator | 2026-03-09 01:21:51 | INFO  | Prepare task for execution of bootstrap-basic.
2026-03-09 01:21:51.877393 | orchestrator | 2026-03-09 01:21:51 | INFO  | Task f12877ef-b8a5-4cd1-8a5c-cdd007bbe919 (bootstrap-basic) was prepared for execution.
2026-03-09 01:21:51.877481 | orchestrator | 2026-03-09 01:21:51 | INFO  | It takes a moment until task f12877ef-b8a5-4cd1-8a5c-cdd007bbe919 (bootstrap-basic) has been started and output is visible here.
2026-03-09 01:22:38.955552 | orchestrator |
2026-03-09 01:22:38.955830 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-09 01:22:38.955851 | orchestrator |
2026-03-09 01:22:38.955861 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 01:22:38.955869 | orchestrator | Monday 09 March 2026 01:21:56 +0000 (0:00:00.069) 0:00:00.069 **********
2026-03-09 01:22:38.955878 | orchestrator | ok: [localhost]
2026-03-09 01:22:38.955887 | orchestrator |
2026-03-09 01:22:38.955896 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-09 01:22:38.955904 | orchestrator | Monday 09 March 2026 01:21:58 +0000 (0:00:01.932) 0:00:02.001 **********
2026-03-09 01:22:38.955914 | orchestrator | ok: [localhost]
2026-03-09 01:22:38.955922 | orchestrator |
2026-03-09 01:22:38.955930 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-09 01:22:38.955938 | orchestrator | Monday 09 March 2026 01:22:06 +0000 (0:00:08.151) 0:00:10.153 **********
2026-03-09 01:22:38.955946 | orchestrator | changed: [localhost]
2026-03-09 01:22:38.955955 | orchestrator |
2026-03-09 01:22:38.955963 | orchestrator | TASK [Create public network] ***************************************************
2026-03-09 01:22:38.955971 | orchestrator | Monday 09 March 2026 01:22:14 +0000 (0:00:08.188) 0:00:18.341 **********
2026-03-09 01:22:38.955979 | orchestrator | changed: [localhost]
2026-03-09 01:22:38.955987 | orchestrator |
2026-03-09 01:22:38.955998 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-09 01:22:38.956006 | orchestrator | Monday 09 March 2026 01:22:19 +0000 (0:00:05.123) 0:00:23.464 **********
2026-03-09 01:22:38.956014 | orchestrator | changed: [localhost]
2026-03-09 01:22:38.956022 | orchestrator |
2026-03-09 01:22:38.956030 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-09 01:22:38.956038 | orchestrator | Monday 09 March 2026 01:22:26 +0000 (0:00:06.753) 0:00:30.218 **********
2026-03-09 01:22:38.956045 | orchestrator | changed: [localhost]
2026-03-09 01:22:38.956053 | orchestrator |
2026-03-09 01:22:38.956061 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-09 01:22:38.956069 | orchestrator | Monday 09 March 2026 01:22:31 +0000 (0:00:04.751) 0:00:34.969 **********
2026-03-09 01:22:38.956077 | orchestrator | changed: [localhost]
2026-03-09 01:22:38.956085 | orchestrator |
2026-03-09 01:22:38.956095 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-09 01:22:38.956113 | orchestrator | Monday 09 March 2026 01:22:35 +0000 (0:00:03.982) 0:00:38.951 **********
2026-03-09 01:22:38.956124 | orchestrator | ok: [localhost]
2026-03-09 01:22:38.956133 | orchestrator |
2026-03-09 01:22:38.956142 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:22:38.956151 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:22:38.956161 | orchestrator |
2026-03-09 01:22:38.956171 | orchestrator |
2026-03-09 01:22:38.956180 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:22:38.956189 | orchestrator | Monday 09 March 2026 01:22:38 +0000 (0:00:03.684) 0:00:42.635 **********
2026-03-09 01:22:38.956198 | orchestrator | ===============================================================================
2026-03-09 01:22:38.956207 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.19s
2026-03-09 01:22:38.956235 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.15s
2026-03-09 01:22:38.956245 | orchestrator | Set public network to default ------------------------------------------- 6.75s
2026-03-09 01:22:38.956254 | orchestrator | Create public network --------------------------------------------------- 5.12s
2026-03-09 01:22:38.956263 | orchestrator | Create public subnet ---------------------------------------------------- 4.75s
2026-03-09 01:22:38.956272 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.98s
2026-03-09 01:22:38.956281 | orchestrator | Create manager role ----------------------------------------------------- 3.68s
2026-03-09 01:22:38.956291 | orchestrator | Gathering Facts --------------------------------------------------------- 1.93s
2026-03-09 01:22:41.486891 | orchestrator | 2026-03-09 01:22:41 | INFO  | It takes a moment until task a9efed71-294a-4e2e-9193-aa0aef323025 (image-manager) has been started and output is visible here.
2026-03-09 01:23:20.691342 | orchestrator | 2026-03-09 01:22:44 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-09 01:23:20.691495 | orchestrator | 2026-03-09 01:22:44 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-09 01:23:20.691516 | orchestrator | 2026-03-09 01:22:44 | INFO  | Importing image Cirros 0.6.2
2026-03-09 01:23:20.691529 | orchestrator | 2026-03-09 01:22:44 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-09 01:23:20.691541 | orchestrator | 2026-03-09 01:22:46 | INFO  | Waiting for image to leave queued state...
2026-03-09 01:23:20.691553 | orchestrator | 2026-03-09 01:22:48 | INFO  | Waiting for import to complete...
2026-03-09 01:23:20.691564 | orchestrator | 2026-03-09 01:22:58 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-09 01:23:20.691576 | orchestrator | 2026-03-09 01:22:59 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-09 01:23:20.691587 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting internal_version = 0.6.2
2026-03-09 01:23:20.691598 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting image_original_user = cirros
2026-03-09 01:23:20.691609 | orchestrator | 2026-03-09 01:22:59 | INFO  | Adding tag os:cirros
2026-03-09 01:23:20.691620 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property architecture: x86_64
2026-03-09 01:23:20.691631 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property hw_disk_bus: scsi
2026-03-09 01:23:20.691641 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property hw_rng_model: virtio
2026-03-09 01:23:20.691652 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-09 01:23:20.691663 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property hw_watchdog_action: reset
2026-03-09 01:23:20.691674 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property hypervisor_type: qemu
2026-03-09 01:23:20.691696 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property os_distro: cirros
2026-03-09 01:23:20.691707 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property os_purpose: minimal
2026-03-09 01:23:20.691718 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property replace_frequency: never
2026-03-09 01:23:20.691728 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property uuid_validity: none
2026-03-09 01:23:20.691739 | orchestrator | 2026-03-09 01:23:01 | INFO  | Setting property provided_until: none
2026-03-09 01:23:20.691750 | orchestrator | 2026-03-09 01:23:01 | INFO  | Setting property image_description: Cirros
2026-03-09 01:23:20.691761 | orchestrator | 2026-03-09 01:23:01 | INFO  | Setting property image_name: Cirros
2026-03-09 01:23:20.691797 | orchestrator | 2026-03-09 01:23:01 | INFO  | Setting property internal_version: 0.6.2
2026-03-09 01:23:20.691810 | orchestrator | 2026-03-09 01:23:02 | INFO  | Setting property image_original_user: cirros
2026-03-09 01:23:20.691822 | orchestrator | 2026-03-09 01:23:02 | INFO  | Setting property os_version: 0.6.2
2026-03-09 01:23:20.691835 | orchestrator | 2026-03-09 01:23:02 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-09 01:23:20.691849 | orchestrator | 2026-03-09 01:23:02 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-09 01:23:20.691861 | orchestrator | 2026-03-09 01:23:02 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-09 01:23:20.691874 | orchestrator | 2026-03-09 01:23:02 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-09 01:23:20.691891 | orchestrator | 2026-03-09 01:23:02 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-09 01:23:20.691904 | orchestrator | 2026-03-09 01:23:03 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-09 01:23:20.691916 | orchestrator | 2026-03-09 01:23:03 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-09 01:23:20.691928 | orchestrator | 2026-03-09 01:23:03 | INFO  | Importing image Cirros 0.6.3
2026-03-09 01:23:20.691941 | orchestrator | 2026-03-09 01:23:03 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-09 01:23:20.691954 | orchestrator | 2026-03-09 01:23:03 | INFO  | Waiting for image to leave queued state...
2026-03-09 01:23:20.691966 | orchestrator | 2026-03-09 01:23:05 | INFO  | Waiting for import to complete...
2026-03-09 01:23:20.691996 | orchestrator | 2026-03-09 01:23:15 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-09 01:23:20.692010 | orchestrator | 2026-03-09 01:23:16 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-09 01:23:20.692022 | orchestrator | 2026-03-09 01:23:16 | INFO  | Setting internal_version = 0.6.3
2026-03-09 01:23:20.692035 | orchestrator | 2026-03-09 01:23:16 | INFO  | Setting image_original_user = cirros
2026-03-09 01:23:20.692047 | orchestrator | 2026-03-09 01:23:16 | INFO  | Adding tag os:cirros
2026-03-09 01:23:20.692059 | orchestrator | 2026-03-09 01:23:16 | INFO  | Setting property architecture: x86_64
2026-03-09 01:23:20.692072 | orchestrator | 2026-03-09 01:23:16 | INFO  | Setting property hw_disk_bus: scsi
2026-03-09 01:23:20.692084 | orchestrator | 2026-03-09 01:23:16 | INFO  | Setting property hw_rng_model: virtio
2026-03-09 01:23:20.692097 | orchestrator | 2026-03-09 01:23:17 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-09 01:23:20.692107 | orchestrator | 2026-03-09 01:23:17 | INFO  | Setting property hw_watchdog_action: reset
2026-03-09 01:23:20.692118 | orchestrator | 2026-03-09 01:23:17 | INFO  | Setting property hypervisor_type: qemu
2026-03-09 01:23:20.692129 | orchestrator | 2026-03-09 01:23:17 | INFO  | Setting property os_distro: cirros
2026-03-09 01:23:20.692140 | orchestrator | 2026-03-09 01:23:17 | INFO  | Setting property os_purpose: minimal
2026-03-09 01:23:20.692150 | orchestrator | 2026-03-09 01:23:18 | INFO  | Setting property replace_frequency: never
2026-03-09 01:23:20.692161 | orchestrator | 2026-03-09 01:23:18 | INFO  | Setting property uuid_validity: none
2026-03-09 01:23:20.692172 | orchestrator | 2026-03-09 01:23:18 | INFO  | Setting property provided_until: none
2026-03-09 01:23:20.692182 | orchestrator | 2026-03-09 01:23:18 | INFO  | Setting property image_description: Cirros
2026-03-09 01:23:20.692202 | orchestrator | 2026-03-09 01:23:18 | INFO  | Setting property image_name: Cirros
2026-03-09 01:23:20.692213 | orchestrator | 2026-03-09 01:23:19 | INFO  | Setting property internal_version: 0.6.3
2026-03-09 01:23:20.692224 | orchestrator | 2026-03-09 01:23:19 | INFO  | Setting property image_original_user: cirros
2026-03-09 01:23:20.692234 | orchestrator | 2026-03-09 01:23:19 | INFO  | Setting property os_version: 0.6.3
2026-03-09 01:23:20.692245 | orchestrator | 2026-03-09 01:23:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-09 01:23:20.692256 | orchestrator | 2026-03-09 01:23:19 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-09 01:23:20.692267 | orchestrator | 2026-03-09 01:23:20 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-09 01:23:20.692277 | orchestrator | 2026-03-09 01:23:20 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-09 01:23:20.692288 | orchestrator | 2026-03-09 01:23:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-09 01:23:21.026191 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-09 01:23:23.687903 | orchestrator | 2026-03-09 01:23:23 | INFO  | date: 2026-03-08
2026-03-09 01:23:23.687989 | orchestrator | 2026-03-09 01:23:23 | INFO  | image: octavia-amphora-haproxy-2025.1.20260308.qcow2
2026-03-09 01:23:23.688024 | orchestrator | 2026-03-09 01:23:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260308.qcow2
2026-03-09 01:23:23.688037 | orchestrator | 2026-03-09 01:23:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260308.qcow2.CHECKSUM
2026-03-09 01:23:23.779681 | orchestrator | 2026-03-09 01:23:23 | INFO  | checksum:
localhost | ok: "/var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/work/logs"
2026-03-09 01:23:57.302918 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/work/artifacts"
2026-03-09 01:23:57.577766 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e666ea591f8a46f2993184f9863979bf/work/docs"
2026-03-09 01:23:57.603186 |
2026-03-09 01:23:57.603393 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-09 01:23:58.594586 | orchestrator | changed: .d..t...... ./
2026-03-09 01:23:58.594995 | orchestrator | changed: All items complete
2026-03-09 01:23:58.595174 |
2026-03-09 01:23:59.308347 | orchestrator | changed: .d..t...... ./
2026-03-09 01:24:00.023161 | orchestrator | changed: .d..t...... ./
2026-03-09 01:24:00.049054 |
2026-03-09 01:24:00.049199 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-09 01:24:00.081324 | orchestrator | skipping: Conditional result was False
2026-03-09 01:24:00.085045 | orchestrator | skipping: Conditional result was False
2026-03-09 01:24:00.105677 |
2026-03-09 01:24:00.105803 | PLAY RECAP
2026-03-09 01:24:00.105868 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-09 01:24:00.105903 |
2026-03-09 01:24:00.236002 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-09 01:24:00.237107 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-09 01:24:00.956253 |
2026-03-09 01:24:00.956411 | PLAY [Base post]
2026-03-09 01:24:00.970574 |
2026-03-09 01:24:00.970709 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-09 01:24:01.978582 | orchestrator | changed
2026-03-09 01:24:01.990319 |
2026-03-09 01:24:01.990451 | PLAY RECAP
2026-03-09 01:24:01.990533 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-09 01:24:01.990613 |
2026-03-09 01:24:02.114513 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-09 01:24:02.117190 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-09 01:24:02.946277 |
2026-03-09 01:24:02.946460 | PLAY [Base post-logs]
2026-03-09 01:24:02.957051 |
2026-03-09 01:24:02.957190 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-09 01:24:03.428316 | localhost | changed
2026-03-09 01:24:03.438428 |
2026-03-09 01:24:03.438584 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-09 01:24:03.474812 | localhost | ok
2026-03-09 01:24:03.479597 |
2026-03-09 01:24:03.479737 | TASK [Set zuul-log-path fact]
2026-03-09 01:24:03.496488 | localhost | ok
2026-03-09 01:24:03.510273 |
2026-03-09 01:24:03.510430 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-09 01:24:03.548565 | localhost | ok
2026-03-09 01:24:03.555885 |
2026-03-09 01:24:03.556091 | TASK [upload-logs : Create log directories]
2026-03-09 01:24:04.086352 | localhost | changed
2026-03-09 01:24:04.091936 |
2026-03-09 01:24:04.092126 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-09 01:24:04.629061 | localhost -> localhost | ok: Runtime: 0:00:00.008603
2026-03-09 01:24:04.639174 |
2026-03-09 01:24:04.639401 | TASK [upload-logs : Upload logs to log server]
2026-03-09 01:24:05.227869 | localhost | Output suppressed because no_log was given
2026-03-09 01:24:05.229745 |
2026-03-09 01:24:05.229910 | LOOP [upload-logs : Compress console log and json output]
2026-03-09 01:24:05.293164 | localhost | skipping: Conditional result was False
2026-03-09 01:24:05.298575 | localhost | skipping: Conditional result was False
2026-03-09 01:24:05.307429 |
2026-03-09 01:24:05.307548 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-09 01:24:05.353982 | localhost | skipping: Conditional result was False
2026-03-09 01:24:05.354612 |
2026-03-09 01:24:05.357729 | localhost | skipping: Conditional result was False
2026-03-09 01:24:05.368247 |
2026-03-09 01:24:05.368359 | LOOP [upload-logs : Upload console log and json output]